Similar Articles
20 similar articles found (search time: 15 ms)
1.
林正奎, 刘丰军, 赵娜. 《情报科学》 (Information Science), 2019, 37(6): 170-177
[Purpose/Significance] This study reviews domestic and international research on the internal mechanisms of group collaboration in online knowledge communities, in order to identify problems and gaps in current research and provide a reference for further in-depth study. [Method/Process] Bibliometric methods, including co-occurrence analysis and multidimensional scaling, were used to analyze the literature, and research in this field is discussed at three levels: user, content, and community. [Result/Conclusion] The collected literature is grouped by research theme into five areas: user participation mechanisms, user interaction mechanisms, content quality influence mechanisms, community operation mechanisms, and community dynamics mechanisms. The shortcomings of existing research are reviewed, and possible future research directions are proposed in terms of research content, perspective, and method.

2.
Taking the Wikipedia category network as a representative self-organized knowledge system network, and WordNet and the Tongyici Cilin (a Chinese synonym thesaurus) as representatives of externally organized knowledge system networks, this study compares the three concept networks using complex network methods. The results show that all three knowledge networks exhibit small-world and scale-free properties and are highly similar in their microstructure, suggesting that knowledge systems share inherent commonalities.
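The small-world check described in entry 2 can be sketched as follows. This is a minimal illustration only: the entry does not name its tooling, so networkx and the toy generated graph stand in for the real Wikipedia category, WordNet, and Tongyici Cilin networks.

```python
import networkx as nx

def small_world_stats(g):
    """Average clustering and average shortest path length: a
    small-world network combines high clustering with short paths."""
    return nx.average_clustering(g), nx.average_shortest_path_length(g)

# Toy stand-in graph; the real inputs would be the three knowledge
# networks compared in the study.
g = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=42)
clustering, path_len = small_world_stats(g)
```

A scale-free check would additionally fit a power law to the degree distribution, which this sketch omits.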

3.
刘丰军, 林正奎, 赵娜. 《科研管理》 (Science Research Management), 2019, 40(3): 153-162
Drawing on social cognitive theory, this study builds a model of the factors influencing collaboration conflict in online knowledge communities, examining how knowledge heterogeneity, group differentiation, anonymity, task complexity, and coordination mechanisms affect collaboration conflict. Using a sample of 364 English Wikipedia articles and hierarchical regression analysis, the empirical results show that knowledge heterogeneity and group differentiation are positively related to collaboration conflict; anonymity has an inverted U-shaped relationship with collaboration conflict; task complexity positively moderates the relationships of knowledge heterogeneity and anonymity with collaboration conflict; and the coordination mechanism positively moderates the relationship between knowledge heterogeneity and collaboration conflict while negatively moderating the relationship between group differentiation and collaboration conflict.

4.
PFIBF is a method for building an association dictionary from the link structure of Wikipedia. The original method analyzes only Wikipedia concepts, ignoring the relationships between concepts and the terms that appear in the documents explaining those concepts. This paper improves PFIBF by using co-occurrence analysis to extract terms from those explanatory documents and treating the extracted terms as additional objects of PFIBF analysis, thereby extending the method's scope. Compared with a dictionary built by the original PFIBF method, the improved method increases the number of terms and association relations in the dictionary without reducing precision, yielding a more complete association dictionary.

5.
Controversy is a complex concept that has been attracting the attention of scholars from diverse fields. In the era of the Internet and social media, detecting controversy and controversial concepts by means of automatic methods is especially important. Web searchers could be alerted when the content they consume is controversial or when they attempt to acquire information on disputed topics. Presenting users with indications and explanations of the controversy should offer them a chance to see the "wider picture" rather than letting them obtain one-sided views. In this work we first introduce a formal model of controversy as the basis of computational approaches to detecting controversial concepts. Then we propose a classification-based method for automatic detection of controversial articles and categories in Wikipedia. Next, we demonstrate how to use the obtained results for the estimation of the controversy level of search queries. The proposed method can be incorporated into search engines as a component responsible for detection of queries related to controversial topics. The method is independent of the search engine's retrieval and search results recommendation algorithms, and is therefore unaffected by a possible filter bubble. Our approach can also be applied in Wikipedia or other knowledge bases to support the detection of controversy and content maintenance. Finally, we believe that our results could be useful to social science researchers for understanding the complex nature of controversy and in fostering their studies.

6.
Automatic text summarization has been an active field of research for many years. Several approaches have been proposed, ranging from simple position and word-frequency methods, to learning and graph based algorithms. The advent of human-generated knowledge bases like Wikipedia offer a further possibility in text summarization – they can be used to understand the input text in terms of salient concepts from the knowledge base. In this paper, we study a novel approach that leverages Wikipedia in conjunction with graph-based ranking. Our approach is to first construct a bipartite sentence–concept graph, and then rank the input sentences using iterative updates on this graph. We consider several models for the bipartite graph, and derive convergence properties under each model. Then, we take up personalized and query-focused summarization, where the sentence ranks additionally depend on user interests and queries, respectively. Finally, we present a Wikipedia-based multi-document summarization algorithm. An important feature of the proposed algorithms is that they enable real-time incremental summarization – users can first view an initial summary, and then request additional content if interested. We evaluate the performance of our proposed summarizer using the ROUGE metric, and the results show that leveraging Wikipedia can significantly improve summary quality. We also present results from a user study, which suggests that using incremental summarization can help in better understanding news articles.
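The iterative ranking on a bipartite sentence–concept graph described in entry 6 can be sketched as a simple alternating score propagation. This is a minimal illustration under stated assumptions, not the paper's exact update rule: the damping constant and the toy affinity matrix are hypothetical.

```python
import numpy as np

def rank_sentences(w, iters=50, damping=0.85):
    """Alternately propagate scores between the two sides of a
    bipartite graph. w[i][j] is the affinity of sentence i to
    concept j (e.g. derived from Wikipedia concept matches)."""
    n_sent, n_con = w.shape
    s = np.ones(n_sent) / n_sent  # sentence scores
    c = np.ones(n_con) / n_con    # concept scores
    for _ in range(iters):
        c = damping * (w.T @ s) + (1 - damping) / n_con
        c /= c.sum()
        s = damping * (w @ c) + (1 - damping) / n_sent
        s /= s.sum()
    return s

# Hypothetical 3-sentence, 2-concept affinity matrix; the middle
# sentence touches both concepts and should rank highest.
w = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
scores = rank_sentences(w)
```

The highest-scoring sentences would then be selected for the summary; query-focused variants bias the uniform restart term toward query-related concepts.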

7.
Research on the collaborative cognitive processes and mechanisms of collaborative editing on Wikipedia is relatively scarce, and the theory of collaborative cognition itself is not yet mature. This paper systematically introduces a collaborative cognitive process model and explains the processes of knowledge absorption and knowledge innovation from the perspective of cognitive conflict. Taking the Wikipedia article on the history of education in Hong Kong as an example, it analyzes Wikipedia's collaborative cognitive process in order to gain deeper insight into collaborative editing, providing a new perspective for studying the evolution of Wikipedia and a research foundation for collaborative cognition in virtual environments.

8.
9.
A Comparative Analysis of Wikipedia and Baidu Baike   Cited by: 2 (self-citations: 0, other citations: 2)
A wiki is a collaborative platform for collective knowledge creation, of which Wikipedia is the best-known and most successful example, while in the Chinese-language web Baidu Baike has grown very rapidly. Through an analysis of operating mechanisms, management styles, incentive mechanisms, copyright protection, and review mechanisms, this paper offers an exploratory study of the similarities and differences between Wikipedia and Baidu Baike.

10.
The recent prevalence of wiki applications has demonstrated that wikis have high potential in facilitating knowledge creation, sharing, integration, and utilization. As wikis are increasingly adopted in contexts such as business, education, research, government, and the public sector, how to improve user contribution becomes an important concern for researchers and practitioners. In this research, we focus on the quality aspect of user contribution: contribution value. Building upon critical mass theory and research on editing activities in wikis, this study investigates whether user interests and resources can increase contribution value for different types of users. We develop our research model and empirically test it in Wikipedia using survey and content analysis methods. The results demonstrate that (1) for users who emphasize substantive edits, depth of interests and depth of resources play more influential roles in affecting contribution value; and (2) for users who focus on non-substantive edits, breadth of interests and breadth of resources are more important in increasing contribution value. The findings suggest that contribution value develops in different patterns for the two types of users. Implications for both theory and practice are discussed.

11.
Using WOS (Web of Science) and Wikipedia as two data sources, this study performs word-frequency statistics and text-classification analysis on big-data-related content, identifies the commonalities and differences in big data topics across the two sources, and further distills the topic categories of the big data field. Shared categories include overall perspectives, technical aspects, application aspects, entities, and activities; finer-grained topics include data and data sources, big data processing and analysis technologies, big data systems and applications, promotion by countries, regions, and enterprises, discussions of society and people, and changes in industries and disciplines. Finally, the paper draws on related data to discuss research frontiers in the big data field.

12.
A growing number of researchers recognize Wikipedia as an effective knowledge source for knowledge acquisition, yet the search engine built into the Wikipedia site does not fully exploit its rich semantic information. This paper therefore presents a comparative study of retrieval models for Chinese Wikipedia. Experiments show that the proposed taxonomy-based semantic retrieval model achieves better precision, recall, and retrieval speed, allowing users to make fuller use of Chinese Wikipedia as a large-scale knowledge base.

13.
This paper examines several different approaches to exploiting structural information in semi-structured document categorization. The methods under consideration are designed for categorization of documents consisting of a collection of fields, or arbitrary tree-structured documents that can be adequately modeled with such a flat structure. The approaches range from trivial modifications of text modeling to more elaborate schemes, specifically tailored to structured documents. We combine these methods with three different text classification algorithms and evaluate their performance on four standard datasets containing different types of semi-structured documents. The best results were obtained with stacking, an approach in which predictions based on different structural components are combined by a meta classifier. A further improvement of this method is achieved by including the flat text model in the final prediction.

14.
This paper describes the development and testing of a novel Automatic Search Query Enhancement (ASQE) algorithm, the Wikipedia N Sub-state Algorithm (WNSSA), which utilises Wikipedia as the sole data source for prior knowledge. The algorithm is built upon the concept of iterative states and sub-states, harnessing Wikipedia's data set and link information to identify and utilise recurring terms to aid term selection and weighting during enhancement. It is designed to prevent query drift by making callbacks to the user's original search intent, persisting the original query between internal states alongside the selected enhancement terms. The developed algorithm has been shown to improve both short and long queries by providing a better understanding of the query and the available data. The proposed algorithm was compared against five existing ASQE algorithms that utilise Wikipedia as the sole data source, showing an average Mean Average Precision (MAP) improvement of 0.273 over the tested algorithms.

15.
16.
Online collaboration, exemplified by Wikipedia, has created an entirely new mode of knowledge production and raises new epistemological questions and challenges. This article addresses these questions and offers an in-depth analysis of the epistemology of collective collaboration. It identifies four differences in epistemic culture between Wikipedia and science, and argues that Wikipedia knowledge can be viewed as a form of collective testimony in epistemological research, one with considerable defensibility. While Wikipedia cannot replace the role of experts, it can generate a model of epistemic egalitarianism that breaks the privilege of knowledge and enables the transfer and flow of epistemic authority.

17.
In this article we examine contributions to Wikipedia through the prism of two divergent critical theorists: Jürgen Habermas and Mikhail Bakhtin. We show that, in slightly dissimilar ways, these theorists came to consider an "aesthetic for democracy" (Hirschkop 1999), or template for deliberative relationships, that privileges relatively free and unconstrained dialogue to which every speaker has equal access and without authoritative closure. We employ Habermas's theory of "universal pragmatics" and Bakhtin's "dialogism" to analyze contributions to Wikipedia's entries on stem cells and transhumanism, and show that the decision to embrace either unified or pluralistic forms of deliberation is an empirical matter to be judged in sociohistorical context, as opposed to what normative theories insist on. We conclude by stressing the need to be attuned to the complexity and ambiguity of deliberative relations online.

18.
Computing Semantic Similarity (SS) between concepts is one of the most critical issues in many domains, such as Natural Language Processing and Artificial Intelligence. Over the years, several SS measurement methods have been proposed that exploit different knowledge resources. Wikipedia provides a large domain-independent encyclopedic repository and a semantic network for computing SS between concepts. Traditional feature-based measures rely on linear combinations of different properties, with two main limitations: insufficient information and the loss of semantic information. In this paper, we propose several hybrid SS measurement approaches that use the Information Content (IC) and features of concepts, avoiding the limitations introduced above. To integrate discrete properties into one component, we present two models of semantic representation, called CORM and CARM. We then compute SS based on these models and take the IC of categories as a supplement to SS measurement. The evaluation, based on several widely used benchmarks and a benchmark we developed ourselves, agrees with intuitions grounded in human judgments. In summary, our approaches are more efficient in determining SS between concepts and correlate better with human judgments than previous methods such as Word2Vec and NASARI.
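Entry 18 builds on Information Content. A minimal Resnik-style sketch of IC-based similarity is shown below with hypothetical corpus counts; the paper's hybrid CORM/CARM models are more elaborate, so this illustrates only the IC ingredient.

```python
import math

def information_content(freq, total):
    """IC(c) = -log p(c): rarer concepts carry more information."""
    return -math.log(freq / total)

def resnik_similarity(lcs_freq, total):
    """Resnik-style similarity: the IC of the two concepts' lowest
    common subsumer (its corpus frequency is supplied directly here)."""
    return information_content(lcs_freq, total)

# Hypothetical corpus counts: a specific shared ancestor ("feline")
# versus a very generic one ("entity"). A specific subsumer implies
# the concepts are more similar.
total = 100_000
sim_specific = resnik_similarity(50, total)     # rare, specific subsumer
sim_generic = resnik_similarity(90_000, total)  # frequent, generic subsumer
```

In a Wikipedia setting, the subsumer frequency could be estimated from category membership counts rather than corpus word counts.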

19.
Wikipedia is known as a free online encyclopedia. Wikipedia uses largely transparent writing and editing processes, which aim at providing the user with quality information through a democratic, collaborative system. However, one aspect of these processes is not transparent: the identity of contributors, editors, and administrators. We argue that this particular lack of transparency jeopardizes the validity of the information being produced by Wikipedia. We analyze the social and ethical consequences of this lack of transparency in Wikipedia for all users, but especially students; we assess the corporate social performance issues involved, and we propose courses of action to compensate for the potential problems. We show that Wikipedia has the appearance, but not the reality, of responsible, transparent information production. This paper's authors are the same as those who authored Wood, D. J. and Queiroz, A. 2008. Information versus knowledge: Transparency and social responsibility issues for Wikipedia. In Antonino Vaccaro, Hugo Horta, and Peter Madsen (Eds.), Transparency, Information, and Communication Technology (pp. 261-283). Charlottesville, VA: Philosophy Documentation Center. Adele has changed her surname from Queiroz to Santana.

20.
In this work, we present the first quality-flaw prediction study for articles containing the two most frequent verifiability flaws in the Spanish Wikipedia: articles that do not cite any references or sources at all (denominated Unreferenced) and articles that need additional citations for verification (so-called Refimprove). Based on the underlying characteristics of each flaw, different state-of-the-art approaches were evaluated. For articles not citing any references, a well-established rule-based approach was evaluated, and interesting findings show that some of them suffer from the Refimprove flaw instead. Likewise, for articles that need additional citations for verification, the well-known PU learning and one-class classification approaches were evaluated. In addition, new methods were compared, and a new feature was proposed to model this latter flaw. The results showed that new methods such as under-bagged decision trees with sum or majority voting rules, biased-SVM, and centroid-based balanced SVM perform best in comparison with the previously published ones.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号