Similar Documents
20 similar documents found (search time: 31 ms)
1.
We introduce archetypal analysis as a tool to describe and categorize scientists. This approach identifies typical characteristics of extreme (‘archetypal’) values in a multivariate data set. These positive or negative contextual attributes can be allocated to each scientist under investigation. In our application, we use a sample of seven bibliometric indicators for 29,083 economists obtained from the RePEc database and identify six archetypes. These are mainly characterized by ratios of published work and citations. We discuss applications and limitations of this approach. Finally, we assign relative shares of the identified archetypes to each economist in our sample.
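The abstract does not spell out how the archetype shares are computed, but the allocation step can be illustrated as a constrained least-squares fit: given archetypes already identified by archetypal analysis, each scientist's indicator vector is approximated by a convex combination of them. The sketch below uses entirely invented indicator values and archetype profiles and is not the paper's actual procedure.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative archetypes (rows) over three bibliometric indicators
# (e.g. papers, citations, h-index), assumed already identified.
archetypes = np.array([
    [ 2.0,   10.0,  2.0],   # "low-output" archetype
    [40.0,  200.0, 12.0],   # "prolific" archetype
    [15.0, 1500.0, 25.0],   # "highly cited" archetype
])

def archetype_shares(x, Z):
    """Shares a >= 0 with sum(a) = 1 minimizing ||x - a @ Z||^2 (convex combination)."""
    k = Z.shape[0]
    res = minimize(
        lambda a: np.sum((x - a @ Z) ** 2),
        x0=np.full(k, 1.0 / k),
        bounds=[(0, 1)] * k,
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1}],
        method="SLSQP",
    )
    return res.x

scientist = np.array([20.0, 700.0, 15.0])   # one hypothetical economist
print(archetype_shares(scientist, archetypes).round(3))
```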

2.
We use data on economic, management and political science journals to produce quantitative estimates of (in)consistency of the evaluations based on six popular bibliometric indicators (impact factor, 5-year impact factor, immediacy index, article influence score, SNIP and SJR). We advocate a new approach to the aggregation of journal rankings. Since rank aggregation is a multicriteria decision problem, ranking methods from social choice theory may solve it. We apply either a direct ranking method based on the majority rule (the Copeland rule, the Markovian method) or a sorting procedure based on a tournament solution, such as the uncovered set and the minimal externally stable set. We demonstrate that the aggregate rankings reduce the number of contradictions and represent the set of the single-indicator-based rankings better than any of the six rankings themselves.
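As a rough illustration of the direct ranking methods mentioned above, the sketch below applies the Copeland rule to journal ranks produced by a few indicators. The journals and indicator ranks are invented, and the paper's full procedure (including the Markovian method and the tournament-based sorting) is not reproduced.

```python
from itertools import combinations

# Illustrative ranks (1 = best) of four hypothetical journals under three indicators.
ranks = {
    "J1": [1, 2, 1],
    "J2": [2, 1, 3],
    "J3": [3, 4, 2],
    "J4": [4, 3, 4],
}

def copeland(ranks):
    """Copeland score: pairwise majority wins minus losses across indicators."""
    journals = list(ranks)
    score = {j: 0 for j in journals}
    for a, b in combinations(journals, 2):
        wins_a = sum(ra < rb for ra, rb in zip(ranks[a], ranks[b]))
        wins_b = sum(rb < ra for ra, rb in zip(ranks[a], ranks[b]))
        if wins_a > wins_b:
            score[a] += 1
            score[b] -= 1
        elif wins_b > wins_a:
            score[b] += 1
            score[a] -= 1
    return sorted(journals, key=lambda j: -score[j])

print(copeland(ranks))   # aggregate ranking, best first: ['J1', 'J2', 'J3', 'J4']
```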

3.
This paper points out that bibliometrics, as an effective evaluation method in the biomedical field, is mainly applied to the evaluation of academic journals and of research performance. Traditional bibliometric evaluation methods have some inherent limitations, and many innovations and improvements have been made to address them. The paper analyzes and discusses new models and indicators for journal evaluation (the asymptotic curve model and the Eigenfactor), as well as two methodological innovations for evaluating research performance (multi-indicator comprehensive analysis and social-network-based analysis), and examines the combined use of bibliometrics with economic and social factors. The emergence and application of these new methods and indicators show that bibliometric evaluation is developing toward the use of mathematical models and computational tools, the shift from single to multiple indicators, and analysis that incorporates complex social-network features and economic and social factors.

4.
Because of the variations in citation behavior across research fields, appropriate standardization must be applied as part of any bibliometric analysis of the productivity of individual scientists and research organizations. Such standardization involves scaling by some factor that characterizes the distribution of the citations of articles from the same year and subject category. In this work we conduct an analysis of the sensitivity of researchers’ productivity rankings to the scaling factor chosen to standardize their citations. To do this we first prepare the productivity rankings for all researchers (more than 30,000) operating in the hard sciences in Italy, over the period 2004–2008. We then measure the shifts in rankings caused by adopting scaling factors other than the particular factor that seems most effective for comparing the impact of publications in different fields: the citation average of the distribution of cited-only publications.
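To make the sensitivity test concrete, a toy version of it can be run on synthetic data: researchers in two fields are normalized either by the field's mean citations over all publications or by the mean over cited-only publications, and the resulting rankings are compared. All numbers, field sizes and distributions below are invented; the paper works on the actual Italian hard-sciences dataset.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic data: 200 hypothetical researchers with 10 papers each, split over
# two fields that have very different citation behaviour.
n_res, n_pap = 200, 10
field = rng.integers(0, 2, size=n_res)                 # field label per researcher
lam = np.where(field == 0, 2.0, 8.0)                   # field-dependent citation rate
cites = rng.poisson(lam[:, None], size=(n_res, n_pap)).astype(float)

# Two candidate scaling factors per field: mean citations over all publications
# vs. mean citations over cited-only publications.
per_field = [cites[field == f].ravel() for f in (0, 1)]
scale_all = np.array([c.mean() for c in per_field])
scale_cited = np.array([c[c > 0].mean() for c in per_field])

# Researcher productivity score: sum of field-normalized citations.
score_all = cites.sum(axis=1) / scale_all[field]
score_cited = cites.sum(axis=1) / scale_cited[field]

rank_all = (-score_all).argsort().argsort()            # 0 = best
rank_cited = (-score_cited).argsort().argsort()

rho, _ = spearmanr(rank_all, rank_cited)
print(f"Spearman correlation between rankings: {rho:.3f}")
print(f"Largest individual shift: {np.abs(rank_all - rank_cited).max()} positions")
```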

5.
This paper analyzes several well-known bibliometric indices using an axiomatic approach. We concentrate on indices aiming at capturing the global impact of a scientific output and do not investigate indices aiming at capturing an average impact. Hence, the indices that we study are designed to evaluate authors or groups of authors but not journals. The bibliometric indices that are studied include classic ones such as the number of highly cited papers as well as more recent ones such as the h-index and the g-index. We give conditions that characterize these indices, up to the multiplication by a positive constant. We also study the bibliometric rankings that are induced by these indices. Hence, we provide a general framework for the comparison of bibliometric rankings and indices.
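For readers unfamiliar with the indices being axiomatized, the h-index and g-index can be computed directly from a list of per-paper citation counts. The citation counts below are illustrative, and both functions follow the common convention of capping the index at the number of published papers.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

def g_index(citations):
    """Largest g such that the g most cited papers have at least g^2 citations in total."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

papers = [25, 19, 12, 9, 7, 4, 4, 2, 1, 0]   # illustrative citation counts
print(h_index(papers), g_index(papers))      # -> 5 9
```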

6.
The launch of Google Scholar Metrics as a tool for assessing scientific journals may be serious competition for Thomson Reuters' Journal Citation Reports, and for the Scopus-powered Scimago Journal Rank. A review of these bibliometric journal evaluation products is performed. We compare their main characteristics from different approaches: coverage, indexing policies, search and visualization, bibliometric indicators, results analysis options, economic cost, and differences in their ranking of journals. Despite its shortcomings, Google Scholar Metrics is a helpful tool for authors and editors in identifying core journals. As an increasingly useful tool for ranking scientific journals, it may also challenge established journal products.

7.
This paper proposes an empirical analysis of several scientists based on their time regularity, defined as the ability to generate an active and stable research output over time, in terms of both quantity/publications and impact/citations. In particular, we empirically analyse three recent bibliometric tools to perform qualitative/quantitative evaluations under the new perspective of regularity. These tools are respectively (1) the PY/CY diagram, (2) the publication/citation Ferrers diagram and triad indicators, and (3) a year-by-year comparison of the scientists’ output (Borda's ranking). Results of the regularity analysis are then compared with those obtained under the classical perspective of overall production. The proposed evaluation tools can be applied to competitive examinations for research positions/promotions, as complementary instruments to the commonly adopted bibliometric techniques.
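The year-by-year comparison in tool (3) can be sketched with a simple Borda count over yearly outputs (ties are ignored for brevity). The scientists and publication counts are invented, and the PY/CY and Ferrers-diagram tools are not reproduced here.

```python
# Year-by-year Borda aggregation: in each year, scientists are ranked by output
# (papers here; citations work the same way) and receive Borda points, from
# n-1 for the best down to 0 for the worst. Summing across years rewards regularity.
papers_per_year = {
    "Alice": [3, 4, 2, 5, 3],
    "Bob":   [9, 0, 0, 8, 0],
    "Carol": [2, 3, 3, 2, 4],
}

names = list(papers_per_year)
n_years = len(next(iter(papers_per_year.values())))
borda = {name: 0 for name in names}
for y in range(n_years):
    yearly = sorted(names, key=lambda n: papers_per_year[n][y])
    for points, name in enumerate(yearly):      # worst gets 0, best gets n-1
        borda[name] += points

print(sorted(borda.items(), key=lambda kv: -kv[1]))
```

In this toy example Alice and Bob have the same total output (17 papers each), but Alice's steadier production gives her the higher Borda score, which is exactly the regularity effect the paper studies.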

8.
Scientific journals are ordered by their impact factor, while countries, institutions or researchers can be ranked by their scientific production, impact or by other simple or composite indicators, as in the case of university rankings. In this paper, the theoretical framework proposed for football competitions in Criado, R., Garcia, E., Pedroche, F. & Romance, M. (2013), A new method for comparing rankings through complex networks: Model and analysis of competitiveness of major European soccer leagues, Chaos, 23, 043114, is used as a starting point to define a general index describing the dynamics of rankings or, conversely, their stability. Some characteristics for studying rankings, measures of ranking dynamics, and axioms for such indices are presented. Furthermore, the notion of volatility of elements in rankings is introduced. Our study includes rankings with ties, entrants and leavers. Finally, some worked-out examples are shown.
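The index defined in the cited framework is built on a competitivity graph and is not reproduced here. As a much simpler proxy for ranking dynamics, the sketch below counts the fraction of element pairs whose relative order flips between two consecutive (invented) rankings, restricted to elements present in both, so entrants and leavers are simply dropped.

```python
from itertools import combinations

def pairwise_flips(rank_t, rank_t1):
    """Fraction of element pairs, present in both rankings, whose order flips.

    A crude volatility proxy only; the index in the cited paper handles ties,
    entrants and leavers far more carefully.
    """
    common = set(rank_t) & set(rank_t1)
    pos_t = {e: rank_t.index(e) for e in common}
    pos_t1 = {e: rank_t1.index(e) for e in common}
    pairs = list(combinations(common, 2))
    if not pairs:
        return 0.0
    flips = sum(
        (pos_t[a] - pos_t[b]) * (pos_t1[a] - pos_t1[b]) < 0
        for a, b in pairs
    )
    return flips / len(pairs)

# Illustrative yearly rankings of five journals (best first); "J5" leaves, "J6" enters.
r2022 = ["J1", "J2", "J3", "J4", "J5"]
r2023 = ["J2", "J1", "J6", "J3", "J4"]
print(pairwise_flips(r2022, r2023))   # 1 flipped pair (J1, J2) out of 6 -> ~0.167
```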

9.
The "regulation" of the impact factor through self-citation in natural science journals   Total citations: 14 (self-citations: 0, citations by others: 14)
李运景, 侯汉清. 《情报学报》, 2006, 25(2): 172-178
Using the Chinese S&T Journal Citation Reports (《中国科技期刊引证报告》), this paper recalculates the impact factors of a number of journals in several disciplines after removing self-citations, and compares the impact factors and journal rankings before and after the removal in order to examine the effect of journal self-citation on the impact factor and on journal rankings. The investigation finds that excessive self-citation by certain individual journals has already distorted journal rankings. Finally, some suggestions are put forward on how to curb this phenomenon.

10.
As China's science and technology administration departments increasingly adopt bibliometric indicators for the quantitative evaluation of research performance, how to use new bibliometric indicators and tools to better serve research management has become a topic worth exploring in depth. From the perspective of research managers, this article combines bibliometrics with visualization techniques and introduces and discusses a series of new bibliometric methods and tools. It provides comprehensive information and tool support for research managers to gain a thorough understanding of the current state of research at their institutions, to benchmark against international averages, to discern trends in collaboration patterns, and to analyze in depth the international influence and cross-disciplinary reach of highly cited papers.

11.
The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of university rankings have been proposed to quantify it, yet the debate on what rankings are exactly measuring is enduring. To address the issue we have measured a quantitative and reliable proxy of the academic reputation of a given institution and compared our findings with well-established impact indicators and academic rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and use the PageRank algorithm on the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference, so that the PageRank algorithm is expected to yield a rank which reflects the reputation of an academic institution in a specific field. Given the volume of the data analysed, our findings are statistically sound and less prone to bias than, for instance, ad hoc surveys often employed by ranking bodies in order to attain similar outcomes. The approach proposed in our paper may contribute to enhancing ranking methodologies, by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact.
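A minimal version of the reputation measurement can be sketched with NetworkX: build a weighted inter-university citation network for one subject category and run PageRank on it. The universities, edge weights and damping factor below are illustrative and are not taken from the Web of Science data used in the paper.

```python
import networkx as nx

# Illustrative weighted citation network between universities in one subject
# category: an edge U -> V with weight w means papers from U cite papers from V w times.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("Univ A", "Univ B", 120),
    ("Univ A", "Univ C", 30),
    ("Univ B", "Univ C", 80),
    ("Univ C", "Univ A", 45),
    ("Univ D", "Univ B", 60),
])

# PageRank as a reputation proxy: a citation counts for more when it comes
# from an institution that is itself highly cited.
reputation = nx.pagerank(G, alpha=0.85, weight="weight")
for univ, score in sorted(reputation.items(), key=lambda kv: -kv[1]):
    print(f"{univ}: {score:.3f}")
```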

12.
Engagement on social networks is a complex concept, in which many interconnected, difficult-to-assess components interact. It is precisely this complexity which motivated this work, which proposes a composite index as a tool to measure engagement. Using TOPSIS, a multicriteria method that bases its ranking on minimizing the distance to the ideal point and maximizing the distance to the anti-ideal, a mix of indicators based on two approaches is used: the tweet approach and the follower approach. The former reflects engagement based on user production, and the latter measures engagement by popularity. This index was applied to a group of Social Media Influencers and a general ranking was obtained, as well as a ranking by each approach to measuring engagement. A comparison of the rankings generated by the different approaches shows the suitability and pertinence of both, as it is confirmed that they measure different aspects, and that both are needed to offer a holistic view of the engagement generated by a user on Twitter; this is a new finding compared to prior studies, which only focused on one approach or the other.
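The core of such a composite index, TOPSIS with vector normalization, can be sketched in a few lines. The engagement indicators, their weights and the influencer values below are invented and do not reproduce the paper's tweet- and follower-approach indicator sets.

```python
import numpy as np

def topsis(X, weights, benefit):
    """TOPSIS scores: relative closeness to the ideal point.

    X: (alternatives x criteria) matrix, weights: criterion weights summing to 1,
    benefit: True for criteria to maximize, False for those to minimize.
    """
    X = np.asarray(X, dtype=float)
    norm = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = norm * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)           # higher = closer to the ideal

# Illustrative engagement indicators for four hypothetical influencers:
# [retweets per tweet, likes per tweet, replies per follower, retweets per follower]
X = [[12, 300, 0.002, 0.010],
     [40, 900, 0.001, 0.020],
     [ 5, 150, 0.004, 0.030],
     [20, 500, 0.003, 0.015]]
weights = np.array([0.25, 0.25, 0.25, 0.25])
scores = topsis(X, weights, benefit=np.array([True, True, True, True]))
print(scores.argsort()[::-1])   # indices of influencers, best first
```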

13.
We address the question of how citation-based bibliometric indicators can best be normalized to ensure fair comparisons between publications from different scientific fields and different years. In a systematic large-scale empirical analysis, we compare a traditional normalization approach based on a field classification system with three source normalization approaches. We pay special attention to the selection of the publications included in the analysis. Publications in national scientific journals, popular scientific magazines, and trade magazines are not included. Unlike earlier studies, we use algorithmically constructed classification systems to evaluate the different normalization approaches. Our analysis shows that a source normalization approach based on the recently introduced idea of fractional citation counting does not perform well. Two other source normalization approaches generally outperform the classification-system-based normalization approach that we study. Our analysis therefore offers considerable support for the use of source-normalized bibliometric indicators.
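The two families of approaches being compared can be illustrated on toy records: classification-based normalization divides a publication's citations by the mean of its field, while source normalization via fractional counting weights each citation by the inverse length of the citing paper's reference list. The fields, citation counts and reference-list lengths below are invented, and the paper's source-normalization variants are more refined than this sketch.

```python
# Toy records: each publication has a field and the reference-list lengths of
# the papers that cite it (so its citation count is the length of that list).
pubs = [
    {"field": "math",    "citing_ref_lengths": [12, 30, 25]},
    {"field": "math",    "citing_ref_lengths": [20]},
    {"field": "biology", "citing_ref_lengths": [45, 50, 60, 40, 55]},
    {"field": "biology", "citing_ref_lengths": [38, 42]},
]

# (1) Classification-based normalization: raw citations divided by the field mean.
fields = {p["field"] for p in pubs}
field_mean = {
    f: sum(len(p["citing_ref_lengths"]) for p in pubs if p["field"] == f)
       / sum(1 for p in pubs if p["field"] == f)
    for f in fields
}
cls_normalized = [len(p["citing_ref_lengths"]) / field_mean[p["field"]] for p in pubs]

# (2) Source normalization via fractional citation counting: each citation is
# weighted by the inverse length of the citing paper's reference list, so
# citations from fields with long reference lists count for less.
frac_counted = [sum(1.0 / r for r in p["citing_ref_lengths"]) for p in pubs]

print([round(v, 2) for v in cls_normalized])
print([round(v, 2) for v in frac_counted])
```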

14.
Accurate measurement of research productivity should take account of both the number of co-authors of every scientific work and the different contributions of the individuals. For researchers in the life sciences, common practice is to indicate such contributions through position in the authors list. In this work, we measure the distortion introduced into bibliometric ranking lists for scientific productivity when the number of co-authors or their position in the list is ignored. The field of observation consists of all Italian university professors working in the life sciences, with scientific production examined over the period 2004–2008. The outcomes of the study lead to a recommendation against using indicators or evaluation methods that ignore the different authors’ contributions to the research results.
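One way to make the recommendation operational is to weight each author's share of a paper by list position. The sketch below contrasts full counting, plain fractional counting and harmonic counting; harmonic counting is used here only as a familiar example of a position-based scheme and is not necessarily the weighting adopted in the paper (life-science conventions also give extra weight to the last author).

```python
def harmonic_credit(n_authors):
    """Harmonic counting: the k-th listed author of an n-author paper receives
    (1/k) / (1 + 1/2 + ... + 1/n) of the credit."""
    denom = sum(1.0 / k for k in range(1, n_authors + 1))
    return [(1.0 / k) / denom for k in range(1, n_authors + 1)]

# Full counting vs. fractional vs. position-weighted credit for a 4-author paper.
n = 4
print("full counting:      ", [1.0] * n)
print("fractional counting:", [1.0 / n] * n)
print("harmonic counting:  ", [round(c, 3) for c in harmonic_credit(n)])
```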

15.
While past research has shown that learning outcomes can be influenced by the amount of effort students invest during the learning process, there has been little research into this question for scenarios where people use search engines to learn. In fact, learning-related tasks represent a significant fraction of the time users spend using Web search, so methods for evaluating and optimizing search engines to maximize learning are likely to have broad impact. Thus, we introduce and evaluate a retrieval algorithm designed to maximize educational utility for a vocabulary learning task, in which users learn a set of important keywords for a given topic by reading representative documents on diverse aspects of the topic. Using a crowdsourced pilot study, we compare the learning outcomes of users across four conditions corresponding to rankings that optimize for different levels of keyword density. We find that adding keyword density to the retrieval objective gave significant learning gains on some topics, with higher levels of keyword density generally corresponding to more time spent reading per word, and stronger learning gains per word read. We conclude that our approach to optimizing search ranking for educational utility leads to retrieved document sets that ultimately may result in more efficient learning of important concepts.
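The idea of folding keyword density into the retrieval objective can be caricatured as a linear re-ranking rule. The topic keywords, documents, baseline relevance scores and the mixing weight lam below are all invented, and the paper's actual retrieval algorithm is more involved.

```python
# Re-rank retrieved documents by mixing a baseline relevance score with keyword
# density, i.e. the fraction of a document's words that are topic keywords.
topic_keywords = {"photosynthesis", "chlorophyll", "stomata", "thylakoid"}

docs = {
    "doc1": ("chlorophyll absorbs light during photosynthesis " * 5).split(),
    "doc2": ("plants are green and grow in soil near water " * 5).split(),
}
relevance = {"doc1": 0.62, "doc2": 0.71}    # baseline retrieval scores (illustrative)

def keyword_density(tokens):
    return sum(t in topic_keywords for t in tokens) / len(tokens)

lam = 0.5   # weight of the keyword-density term in the ranking objective
score = {d: relevance[d] + lam * keyword_density(t) for d, t in docs.items()}
print(sorted(score, key=score.get, reverse=True))   # doc1 moves ahead of doc2
```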

16.
The paper compares the scientific profiles of 199 countries relative to 254 subject categories, based on the impact of knowledge produced in each category, measured by the bibliometric indicator known as Total Fractional Impact (TFI). TFI is calculated on the basis of publications indexed in Web of Science (here over the years 2010-2019). The approach taken overcomes some critical issues occurring with indicators previously proposed for the same purpose. With this approach, it is possible to: i) produce, for any country, a scientific specialization profile in correspondence with each subject category; ii) identify distinctive or common characteristics of individual countries or clusters of countries. The approach provides a new tool which may prove useful for the formulation of research policies.

17.
As the volume of scientific articles has grown rapidly over the last decades, evaluating their impact becomes critical for tracing valuable and significant research output. Many studies have proposed various ranking methods to estimate the prestige of academic papers using bibliometric methods. However, the weight of the links in bibliometric networks has rarely been considered for article ranking in the existing literature. Such incomplete investigation of bibliometric methods could lead to biased ranking results. Therefore, a novel scientific article ranking algorithm, W-Rank, is introduced in this study, built on a weighting scheme that assigns weights to the links of the citation network and the authorship network by measuring citation relevance and author contribution. Combining the weighted bibliometric networks with a propagation algorithm, W-Rank is able to obtain article ranking results that are more reasonable than those of existing PageRank-based methods. Experiments are conducted on both the arXiv hep-th and Microsoft Academic Graph datasets to verify W-Rank and compare it with three well-known article ranking algorithms. Experimental results show that the proposed weighting scheme helps W-Rank obtain more accurate ranking results and, in certain respects, outperform the other algorithms.
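W-Rank itself is not reproduced here, but its weighted-propagation flavour can be approximated by a power iteration of PageRank on a citation matrix whose entries stand in for citation relevance. Author-contribution weights and the authorship network are omitted, and the matrix below is a toy example.

```python
import numpy as np

def weighted_pagerank(W, alpha=0.85, tol=1e-10, max_iter=200):
    """Power iteration on a weighted citation matrix W, where W[i, j] is the
    weight of the citation from paper i to paper j."""
    n = W.shape[0]
    out = W.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; dangling papers spread score uniformly.
    P = np.where(out > 0, W / np.where(out == 0, 1, out), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - alpha) / n + alpha * r @ P
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Toy weighted citation matrix for four papers; weights stand in for citation
# relevance (e.g. text similarity between the citing and cited papers).
W = np.array([
    [0.0, 0.9, 0.2, 0.0],
    [0.0, 0.0, 0.7, 0.0],
    [0.0, 0.0, 0.0, 0.0],   # dangling paper: cites nothing
    [0.0, 0.5, 0.8, 0.0],
])
print(weighted_pagerank(W).round(3))
```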

18.
The issue of funding for basic research is investigated. Public funding models for research organizations are presented. A new approach to the assessment of possible thematic areas, based on bibliometric and webometric indicators, is introduced. The architecture of the state system of scientific and technical information is discussed. Scientometric methods provide an instrument for resolving the infrastructure problems of Russian science.

19.
An analysis of bibliometric indicators for discipline evaluation   Total citations: 8 (self-citations: 0, citations by others: 8)
Focusing on discipline evaluation, this paper concentrates on the study of bibliometric evaluation indicators. It screens the bibliometric indicators currently used for discipline evaluation in China and abroad and presents a reference set of commonly used, scientifically sound bibliometric indicators for discipline evaluation. It designs an indicator analysis process and an indicator integration structure diagram, and carries out functional analysis and identification-feature analysis of the indicators in the reference set, attempting to uncover the intrinsic characteristics and regularities of bibliometric evaluation indicators.

20.
The influence of discipline rankings on faculty employment at American iSchools   Total citations: 1 (self-citations: 0, citations by others: 1)
[Purpose/Significance] This study explores the influence of discipline rankings on faculty employment at American iSchools, in order to reveal the current state of discipline development and to provide a reference for faculty building in library, information and archival science in China. [Method/Process] From the perspective of faculty mobility, it analyzes the change in discipline ranking between the current institution and the institution of graduation for 880 faculty members at 27 American iSchools. [Results/Conclusions] The study finds that graduates from higher-ranked programs have more employment options and opportunities; most iSchool graduates work at institutions ranked lower than the one they graduated from; and there are clear gender differences in iSchool faculty employment, with men more likely than women to be employed at higher-ranked institutions. In building China's "Double First-Class" universities, the positive effects that evaluation and rankings bring to faculty development should be harnessed, while the negative effects should be monitored and mitigated.
