Similar Articles
20 similar records found
1.
This study assesses whether eleven factors associate with higher impact research: individual, institutional, and international collaboration; journal and reference impacts; abstract readability; reference and keyword totals; and paper, abstract, and title lengths. Authors may have some control over these factors, so this information may help them to conduct and publish higher impact research. These factors have been researched before, but with partially conflicting findings. A simultaneous assessment of all eleven factors for Biology and Biochemistry, Chemistry, and the Social Sciences used a single negative binomial-logit hurdle model to estimate the percentage change in mean citation counts per unit of increase or decrease in each predictor variable. The journal Impact Factor significantly associates with increased citations in all three areas. The number of cited references and their average citation impact also significantly associate with higher article citation impact. Individual and international teamwork give a citation advantage in Biology and Biochemistry and in Chemistry, but inter-institutional teamwork is not important in any of the three subject areas. Abstract readability is likewise either not significant or of no practical significance. Among the article size features, abstract length significantly associates with increased citations, but the number of keywords, title length, and paper length are insignificant or of no practical significance. In summary, at least some aspects of collaboration and of journal and document properties significantly associate with higher citations. The results provide new and particularly strong statistical evidence that authors should consider publishing in high impact journals, ensure that they do not omit relevant references, engage in the widest possible teamwork where appropriate, and write extensive abstracts.
A new finding is that whilst it seems useful to collaborate, and to collaborate internationally, there seems to be no particular need to collaborate with other institutions within the same country.
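The interpretation of the hurdle model's count-part coefficients described above (percentage change in mean citations per unit change in a predictor) can be sketched as follows; the coefficient values are hypothetical, purely illustrative, and not taken from the study:

```python
import math

def pct_change_per_unit(beta):
    # In a log-link count model (such as the negative binomial part of
    # a hurdle model), coefficient beta implies a (exp(beta) - 1) * 100
    # percent change in expected citations per one-unit predictor change
    return (math.exp(beta) - 1.0) * 100.0

# Hypothetical coefficients, purely illustrative
print(round(pct_change_per_unit(0.25), 1))
print(round(pct_change_per_unit(-0.10), 1))
```

This is why a single model can report the citation effect of eleven heterogeneous predictors on a common percentage scale.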

2.
3.
4.
It is well known that the distribution of citations to articles in a journal is skewed. We ask whether journal rankings based on the impact factor are robust with respect to this fact. For 100 economics journals, we exclude the most cited paper, then the top 5 and the top 10 cited papers, and recalculate the impact factor. We then compare the resulting rankings with the original 2012 rankings. Our results show that the rankings are relatively robust; this holds for both the 2-year and the 5-year impact factor.
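The recalculation exercise can be illustrated with a minimal sketch; the citation counts below are invented for illustration, and the simplified formula ignores the JIF's two-year citation window:

```python
def impact_factor(citations):
    # Simplified: total citations divided by number of citable items
    # (the real JIF restricts citations to a two-year window)
    return sum(citations) / len(citations)

# Hypothetical, skewed citation counts for one journal's items
cites = [120, 15, 9, 7, 5, 4, 3, 2, 1, 0]

full_if = impact_factor(cites)
# Exclude the single most cited paper and recalculate
trimmed_if = impact_factor(sorted(cites, reverse=True)[1:])
print(full_if, trimmed_if)
```

With a skewed distribution like this, dropping one outlier changes a journal's own score substantially; the paper's point is that *relative rankings* across journals nevertheless change little.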

5.
The journal impact factor (JIF) is the average number of citations received by the papers published in a journal, calculated according to a specific formula; it is extensively used for the evaluation of research and researchers. The method assumes that all papers in a journal have the same scientific merit, measured by the JIF of the publishing journal. This implies that the number of citations measures scientific merit, yet the JIF does not evaluate each individual paper by its own citation count. Therefore, in the comparative evaluation of two papers, use of the JIF carries a risk of failure, which occurs when the paper in the lower-JIF journal actually has more citations than the paper in the higher-JIF journal. To quantify this risk, this study calculates failure probabilities, taking advantage of the lognormal distribution of citations. For two journals whose JIFs differ ten-fold, the failure probability is low. In most comparisons, however, the JIFs of the two journals are not so different, and the failure probability can then approach 0.5, which is equivalent to evaluating by coin flipping.
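Under the lognormal assumption, the failure probability has a closed form: if ln(citations) in each journal is normal with a journal-specific mean and a common standard deviation, then P(lower-JIF paper outscores higher-JIF paper) = Φ((μ_low − μ_high)/(σ√2)). A minimal sketch with hypothetical parameter values (not the study's fitted ones):

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def failure_probability(mu_low, mu_high, sigma):
    # P(a random paper from the lower-JIF journal has more citations
    # than a random paper from the higher-JIF journal), assuming
    # ln(citations) ~ Normal(mu_journal, sigma^2) in each journal
    return normal_cdf((mu_low - mu_high) / (sigma * math.sqrt(2.0)))

# Hypothetical parameters: a ten-fold JIF gap roughly corresponds to
# a ln(10) difference in mean log-citations
p_far = failure_probability(0.0, math.log(10), 1.0)
# Similar journals: only a small difference in mean log-citations
p_near = failure_probability(0.0, 0.2, 1.0)
print(p_far, p_near)
```

As the mean difference shrinks to zero, the probability tends to exactly 0.5, the coin-flip limit described above.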

6.
In this paper, we propose a novel framework for a scholarly journal: a token-curated registry (TCR). This model originates in the field of blockchain and cryptoeconomics and is essentially a decentralized system in which tokens (digital currency) are used to incentivize quality curation of information. A TCR is an automated way to create lists of any kind, where decisions (whether or not to include a given entry) are made through voting that brings benefit or loss to the voters. In an academic journal, a TCR could act as a tool to introduce community-driven decisions on which papers to publish, thus encouraging more active participation of authors/reviewers in editorial policy and elaborating the idea of a journal as a club. A TCR could also provide a novel solution to the problems of editorial bias and the lack of rewards/incentives for reviewers. In the paper, we discuss the core principles of TCRs, their technological and cultural foundations, and finally analyse the risks and challenges they could bring to scholarly publishing.

7.
Medical publishing uses the skills of people from a wide range of backgrounds. In this study we set out to examine their attitudes and assess the degree of homogeneity. We gathered questionnaire responses from selected editors and medical reviewers and found that, on the whole, there was a homogeneous culture, though there were some significant differences. This has important implications for managers and trainers.

8.
9.
Most analyses of plagiarism focus on published content and do not report on the prevalence of plagiarism in submitted articles. Fears over large-scale plagiarism, particularly in articles submitted by authors for whom English is a second language, have only been investigated in small publishing communities or using duplication-checking analysis, which does not separate legitimate from unacceptable duplication. This research surveyed journal editors from around the world to ascertain recent (past year) experiences of plagiarized and/or duplicated submissions. We then compared their experiences to their assumptions about global levels of plagiarism. The survey received 372 responses, including 119 from Asian editors, 112 from European editors, and 57 from editors in North America. The respondents estimated that c.15% of all submissions contained plagiarized or duplicated content, although their own experiences were in the range of 2–5% of submissions. Of the respondents, 42% reported no incidence of plagiarized or duplicated submissions in the past year. Asian editors experienced the highest levels of plagiarized/duplicated content, although most of the problem articles were resolved, indicating that most of the identified duplication constituted relatively minor problems, rather than fraudulent plagiarism.

10.
11.
The digitization of journal content and its availability online has revolutionized journal publishing in recent years, creating both opportunities and challenges for traditional journal publishers. The explosion of data, the emergence of new players such as Google, new business models such as Open Access, and new content consumers and producers such as China are significantly changing the face of journal publishing. It is not yet clear what the impact of these changes will be, but by continuing to collaborate with our existing stakeholders, building partnerships with these newcomers, and maintaining and promoting the quality of our content, we can ensure our future growth and success.

12.
13.
Research articles seem to have direct value for students in some subject areas, even though scholars may be their target audience. If this is true, then subject areas with this type of educational impact could justify claims for enhanced funding. To seek evidence of disciplinary differences in the direct educational uptake of journal articles (ignoring books, conference papers, and other scholarly outputs), this paper assesses the total number and proportions of student readers of academic articles in Mendeley across 12 different subjects. The results suggest that whilst few students read mathematics research articles, in other areas the number of student readers is broadly proportional to the number of research readers. Although the average numbers of undergraduate readers of articles vary by up to 50 times between subjects, this could be explained by differing levels of uptake of Mendeley rather than differing educational value of disciplinary research. Overall, then, the results do not support the claim that journal articles in some areas have substantially more educational value for academia than average, compared with their research value.

14.
To evaluate peer review by author-suggested reviewers (Ra), this research compared them with editor-selected reviewers (Re) using one year of data collected from the Journal of Systematics and Evolution. The results indicated that (1) Ra responded more positively than Re: they accepted invitations to review more often, were more likely to suggest alternative reviewers, and were less likely to ignore a review invitation; (2) there was no statistically significant difference in timeliness between Ra and Re; (3) editors rated Re reviews as higher quality than Ra reviews, but the word counts of these reviews did not differ statistically; (4) Ra made more favourable publication recommendations than Re; and (5) Ra were more often based in the authors' country than Re, and this correlated with the effect of location on reviewer response and publication recommendations. These results suggest that authors should be encouraged to suggest reviewers. In terms of policy, however, journals and editors should collect and consult at least one review from a source other than author-suggested reviewers, and when reviewers nominated by authors are used, priority should be given to those based in different locations from the authors.

15.
16.
Our objective was to perform a pilot study to estimate the proportion of published errata linked to randomized controlled trials (RCTs) that are worth obtaining when doing a systematic review. MEDLINE was searched for records that had both 'randomized-controlled-trial' in the publication type field and 'erratum' in the comments field. One hundred records from four general medical journals were examined independently from two different perspectives. From the information specialist's perspective, 74% of the errata were considered worth obtaining; these were mainly errors in tables or figures. Another 9% described less serious errors but were worth obtaining if easily available. The remaining 17% were minor errors. From the perspective of the experienced reviewer/public health consultant, 5% of errata were classified as likely to affect a meta-analysis, and 10% as having significant errors that would affect the interpretation of the RCT but not a meta-analysis; 85% were not considered important enough to affect either. About 5% of errata to RCTs therefore appeared to matter in terms of changing the final conclusions of a systematic review. However, the majority of errata were considered worth obtaining, on the basis that having full and accurate data can reduce confusion and save reviewers time.

17.
The publication indicator of the Finnish research funding system is based on a manual ranking of scholarly publication channels. These ranks, which represent the evaluated quality of the channels, are continuously kept up to date and thoroughly reevaluated every four years by groups of nominated scholars belonging to different disciplinary panels. This expert-based decision-making process is informed by available citation-based metrics and other relevant metadata characterizing the publication channels. The purpose of this paper is to introduce various approaches that can explain the basis and evolution of the quality of publication channels, i.e., ranks. This is important for the academic community, whose research work is being governed using the system. Data-based models that, with sufficient accuracy, explain the level of or changes in ranks provide assistance to the panels in their multi-objective decision making, thus suggesting and supporting the need to use more cost-effective, automated ranking mechanisms. The analysis relies on novel advances in machine learning systems for classification and predictive analysis, with special emphasis on local and global feature importance techniques.
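One family of global feature-importance techniques of the kind the abstract mentions, permutation importance, can be sketched model-agnostically; the toy "channel-rank classifier" and data below are hypothetical stand-ins, not the paper's models or features:

```python
import random

def accuracy(model, X, y):
    # Fraction of rows the model labels correctly
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, trials=50, seed=0):
    # Mean drop in accuracy when one feature's column is shuffled:
    # a simple, model-agnostic global importance estimate
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        Xp = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
        drops.append(base - accuracy(model, Xp, y))
    return sum(drops) / trials

def model(x):
    # Toy stand-in for a channel-rank classifier (hypothetical rule)
    return 1 if x[0] > 0.5 else 0

# Deterministic toy data: two features, labels depend only on feature 0
X = [[random.Random(i).random(), random.Random(i + 999).random()]
     for i in range(200)]
y = [model(x) for x in X]

print(permutation_importance(model, X, y, 0))
print(permutation_importance(model, X, y, 1))
```

The informative feature shows a large accuracy drop when shuffled, while the ignored feature shows none; the same diagnostic, applied to real channel metadata, is what would let panels see which metrics drive a predicted rank.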

18.
19.
Do more distant collaborations have more citation impact?
Internationally co-authored papers are known to have more citation impact, on average, than nationally co-authored papers. However, the question of whether there are systematic differences between pairs of collaborating countries in terms of the citation impact of their joint output has remained unanswered. On the basis of all scientific papers published in 2000 and co-authored by two or more European countries, we show that citation impact increases with the geographical distance between the collaborating countries.

20.