Similar Documents (20 results)
1.
Learning Algorithms for Keyphrase Extraction
Many academic journals ask their authors to provide a list of about five to fifteen keywords, to appear on the first page of each article. Since these key words are often phrases of two or more words, we prefer to call them keyphrases. There is a wide variety of tasks for which keyphrases are useful, as we discuss in this paper. We approach the problem of automatically extracting keyphrases from text as a supervised learning task. We treat a document as a set of phrases, which the learning algorithm must learn to classify as positive or negative examples of keyphrases. Our first set of experiments applies the C4.5 decision tree induction algorithm to this learning task. We evaluate the performance of nine different configurations of C4.5. The second set of experiments applies the GenEx algorithm to the task. We developed the GenEx algorithm specifically for automatically extracting keyphrases from text. The experimental results support the claim that a custom-designed algorithm (GenEx), incorporating specialized procedural domain knowledge, can generate better keyphrases than a general-purpose algorithm (C4.5). Subjective human evaluation of the keyphrases generated by GenEx suggests that about 80% of the keyphrases are acceptable to human readers. This level of performance should be satisfactory for a wide variety of applications.
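The supervised framing above, a document as a set of candidate phrases to be classified, can be sketched in a few lines. This is an illustrative stand-in, not the paper's GenEx or C4.5 setup: the n-gram candidate generator and the feature set (a crude substring frequency, first-occurrence position, phrase length) are assumptions for the sketch; a real system would feed such features to a learned classifier.

```python
import re

def candidate_phrases(text, max_len=3):
    """Generate 1- to 3-word n-grams as candidate keyphrases."""
    words = re.findall(r"[a-z]+", text.lower())
    cands = set()
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            cands.add(" ".join(words[i:i + n]))
    return words, cands

def phrase_features(text):
    """Compute simple features for each candidate phrase: frequency,
    relative position of first occurrence, and length in words.
    A supervised learner would classify candidates from such features."""
    words, cands = candidate_phrases(text)
    joined = " ".join(words)
    feats = {}
    for p in cands:
        feats[p] = {
            "tf": joined.count(p),  # crude: substring count, can overcount
            "first_pos": joined.find(p) / max(1, len(joined)),
            "len": len(p.split()),
        }
    return feats
```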

2.
Distributed memory information retrieval systems have been used as a means of managing the vast volume of documents in an information retrieval system, and to improve query response time. However, proper allocation of documents plays an important role in improving the performance of such systems. The amount of parallelism is maximised by distributing the documents, while the inter-node communication cost is minimised by avoiding document distribution. Unfortunately, these two factors contradict each other. Finding an optimal allocation satisfying both objectives is referred to as the distributed memory document allocation problem (DDAP), and it is NP-complete. Heuristic algorithms are usually employed to find a near-optimal solution to such problems; the genetic algorithm is one of them. In this paper, a genetic algorithm is developed to find an optimal document allocation for DDAP. Several well-known network topologies are investigated to evaluate the performance of the algorithm. The approach relies on the fact that the documents of an information retrieval system are clustered by some arbitrary method; the advantages of a clustered document approach, especially in a distributed memory information retrieval system, are well known. Since genetic algorithms work with a set of candidate solutions, parallelisation based on the Single Instruction Multiple Data (SIMD) paradigm is a natural way to obtain a speedup. Under this approach, the population of strings is distributed among the processing elements and each string is processed independently. The performance gain comes from the parallel processing of the strings and therefore depends heavily on the population size. The approach suits genetic-algorithm applications where the parameter set for a particular run is well known in advance and where a large population size is required to solve the problem. DDAP fits these requirements nicely.
The aim of the parallelisation is two-fold: first, to speed up the allocation process in DDAP, which usually involves thousands of documents and requires a large population size; and second, to serve as an attempt to port the genetic algorithm's processes onto SIMD machines.
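A minimal sketch of the genetic-algorithm allocation described above, under simplifying assumptions: the fitness function here rewards only load balance across nodes and omits the inter-node communication term, and `doc_sizes`, the population size, and the mutation rate are illustrative choices, not the paper's.

```python
import random

def fitness(assign, doc_sizes, n_nodes):
    """Reward balanced load across nodes (maximises parallelism).
    0 is ideal; more negative means more imbalance."""
    loads = [0] * n_nodes
    for doc, node in enumerate(assign):
        loads[node] += doc_sizes[doc]
    return -(max(loads) - min(loads))

def ga_allocate(doc_sizes, n_nodes, pop=30, gens=200, seed=0):
    """Evolve an assignment of documents to nodes with an elitist GA."""
    rng = random.Random(seed)
    n = len(doc_sizes)
    popn = [[rng.randrange(n_nodes) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda a: fitness(a, doc_sizes, n_nodes), reverse=True)
        survivors = popn[: pop // 2]          # keep the best half
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # point mutation
                child[rng.randrange(n)] = rng.randrange(n_nodes)
            children.append(child)
        popn = survivors + children
    return max(popn, key=lambda a: fitness(a, doc_sizes, n_nodes))
```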

3.
This case study describes the efforts of librarians to integrate mobile devices, collaboration tools, and resources into a School of Medicine third-year pediatric clerkship. Additional class emphasis is on evidence-based searching and journal article evaluation and presentation. The class objectives ensure that students are comfortable with mobile devices and collaboration tools. Over the eight-year history of the course, student acceptance of the mobile devices used diminished as the devices aged, necessitating the evaluation and selection of new technologies. Collaboration tools and mobile applications employed in the course evolved to accommodate curriculum changes.

5.
Documents in digital and paper libraries may be arranged by topic in order to facilitate browsing. It may seem intuitively obvious that ordering documents by subject should improve browsing performance; the results presented in this article suggest that ordering library materials by their Gray code values, and adding links consistent with the small-world model of document relationships, improves browsing performance. Below, library circulation data, including ordering with Library of Congress Classification numbers and Library of Congress Subject Headings, are used to provide information useful in generating user-centered document arrangements, as well as user-independent arrangements. Documents may be arranged linearly so they can be placed in a line by topic, such as on a library shelf or in a list on a computer display. Crossover links, jumps between a document and another document to which it is not adjacent, can be used in library databases to provide additional paths that one might take when browsing. The improvement obtained with different combinations of document orderings and different crossovers is examined and applications are suggested.
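The Gray-code ordering idea can be made concrete: if each document is represented by a topic bit-vector (an assumption for this sketch), interpreting that vector as a binary-reflected Gray code and sorting by its rank in the Gray sequence places documents so that neighbours differ in few topic bits.

```python
def to_gray(n):
    """Binary-reflected Gray code: successive values differ in one bit."""
    return n ^ (n >> 1)

def gray_order(doc_topic_bits):
    """Order documents so adjacent documents have similar topic
    bit-vectors: treat each vector as a Gray code and sort by its
    rank (position) in the Gray-code sequence."""
    def gray_rank(bits):
        # invert the Gray code to recover its position in the sequence
        n = 0
        val = int("".join(map(str, bits)), 2)
        while val:
            n ^= val
            val >>= 1
        return n
    return sorted(doc_topic_bits, key=gray_rank)
```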

6.
This research presents the results of a project that investigated how students use a library-developed mobile app to locate books in the library. The study employed a formative-evaluation methodology so that the development of the mobile app would be informed by user preferences for next-generation wayfinding systems. A key finding is the importance of gathering ongoing user feedback for designing mobile academic library applications that are both useful and used. Elements and data points to include in future mobile interfaces are discussed.

7.
In this paper we evaluate the application of data fusion, or meta-search, methods, combining different algorithms and XML elements, to content-oriented retrieval of XML structured data. The primary approach is the combination of probabilistic methods, using logistic regression and the Okapi BM-25 algorithm to estimate document or XML element relevance, with Boolean approaches for some query elements. In the evaluation we use the INEX XML test collection to examine the relative performance of individual algorithms and elements, and compare these with the performance of the data fusion approaches.
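A hedged sketch of score-level data fusion. The paper combines logistic-regression and Okapi BM-25 relevance estimates; the generic CombSUM with min-max normalisation shown here is a standard stand-in for such combinations, not the authors' exact method.

```python
def minmax(scores):
    """Normalise one run's scores to [0, 1] so runs can be combined."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def combsum(*runs):
    """CombSUM data fusion: rank documents by the sum of their
    normalised scores across all input runs."""
    fused = {}
    for run in runs:
        for doc, s in minmax(run).items():
            fused[doc] = fused.get(doc, 0.0) + s
    return sorted(fused, key=fused.get, reverse=True)
```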

8.
Probabilistic topic models have recently attracted much attention because of their successful applications in many text mining tasks such as retrieval, summarization, categorization, and clustering. Although many existing studies have reported promising performance of these topic models, none of the work has systematically investigated the task performance of topic models; as a result, some critical questions that may affect the performance of all applications of topic models are mostly unanswered, particularly how to choose between competing models, how multiple local maxima affect task performance, and how to set parameters in topic models. In this paper, we address these questions by conducting a systematic investigation of two representative probabilistic topic models, probabilistic latent semantic analysis (PLSA) and Latent Dirichlet Allocation (LDA), using three representative text mining tasks, including document clustering, text categorization, and ad-hoc retrieval. The analysis of our experimental results provides deeper understanding of topic models and many useful insights about how to optimize the performance of topic models for these typical tasks. The task-based evaluation framework is generalizable to other topic models in the family of either PLSA or LDA.

9.
When traditional clustering algorithms are applied directly to text clustering, a prominent problem is that they only group objects into clusters without providing any conceptual description or explanation of the resulting clusters. Labeling the clusters produced from a document collection is known as the cluster description problem. Cluster descriptions help users quickly determine whether a generated document category is relevant to their needs, and producing them is an important and challenging task in text clustering applications. To address the poor readability of text clustering results, this paper proposes an algorithm that enhances the comprehensibility and readability of clustering results: a cluster description algorithm based on support vector machines. Experimental results show that the SVM-based cluster description algorithm outperforms conventional cluster description methods.

10.
Document clustering of scientific texts using citation contexts
Document clustering has many important applications in the area of data mining and information retrieval. Many existing document clustering techniques use the “bag-of-words” model to represent the content of a document. However, this representation is only effective for grouping related documents when these documents share a large proportion of lexically equivalent terms. In other words, instances of synonymy between related documents are ignored, which can reduce the effectiveness of applications using a standard full-text document representation. To address this problem, we present a new approach for clustering scientific documents, based on the utilization of citation contexts. A citation context is essentially the text surrounding the reference markers used to refer to other scientific works. We hypothesize that citation contexts will provide relevant synonymous and related vocabulary which will help increase the effectiveness of the bag-of-words representation. In this paper, we investigate the power of these citation-specific word features, and compare them with the original document’s textual representation in a document clustering task on two collections of labeled scientific journal papers from two distinct domains: High Energy Physics and Genomics. We also compare these text-based clustering techniques with a link-based clustering algorithm which determines the similarity between documents based on the number of co-citations, that is, in-links from citing documents and out-links to cited documents. Our experimental results indicate that the use of citation contexts, when combined with the vocabulary in the full-text of the document, is a promising alternative means of capturing critical topics covered by journal articles.
More specifically, this document representation strategy, when used by the clustering algorithm investigated in this paper, outperforms both the full-text clustering approach and the link-based clustering technique on both scientific journal datasets.
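A citation context, as described above, is simply a text window around a reference marker. A minimal extraction sketch, assuming numeric markers like `[12]` and whitespace tokenisation (real markers and tokenisation vary):

```python
import re

def citation_contexts(text, window=5):
    """Extract the words surrounding each reference marker like [12].
    Each window approximates the 'citation context' of the cited work;
    the markers themselves are dropped from the extracted context."""
    tokens = text.split()
    contexts = []
    for i, tok in enumerate(tokens):
        if re.fullmatch(r"\[\d+\]", tok):
            lo, hi = max(0, i - window), i + window + 1
            ctx = [t for t in tokens[lo:hi] if not re.fullmatch(r"\[\d+\]", t)]
            contexts.append(" ".join(ctx))
    return contexts
```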

11.
The infrastructure of a typical search engine can be used to calculate and resolve persistent document identifiers: a string that can uniquely identify and locate a document on the Internet without reference to its original location (URL). Bookmarking a document using such an identifier allows its retrieval even if the document's URL, and, in many cases, its contents change. Web client applications can offer facilities for users to bookmark a page by reference to a search engine and the persistent identifier instead of the original URL. The identifiers are calculated using a global Internet term index; a document's unique identifier consists of a word or word combination that occurs uniquely in the specific document. We use a genetic algorithm to locate a minimal unique document identifier: the shortest word or word combination that will locate the document. We tested our approach by implementing tools for indexing a document collection, calculating the persistent identifiers, performing queries, and distributing the computation and storage load among many computers.
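The core search, finding the shortest word combination that occurs in exactly one document, can be sketched as an exhaustive scan over an inverted index. The paper uses a genetic algorithm to make this tractable at Internet scale; the exhaustive version below is only illustrative and assumes a small in-memory index.

```python
from itertools import combinations

def minimal_identifier(target, index, max_terms=3):
    """Return the smallest word combination whose posting-list
    intersection contains only `target`. `index` is an inverted
    index mapping word -> set of documents containing it."""
    words = sorted(w for w, docs in index.items() if target in docs)
    for k in range(1, max_terms + 1):          # shortest combinations first
        for combo in combinations(words, k):
            docs = set(index[combo[0]])
            for w in combo[1:]:
                docs &= index[w]               # intersect posting lists
            if docs == {target}:
                return combo
    return None
```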

12.
Text classification is one of the important applications in information retrieval. Because all documents are represented in a uniform feature-vector form, each document's feature vector is high-dimensional and sparse, which degrades the performance and accuracy of document classification. To improve the accuracy of text feature selection, this paper first proposes an improved information-gain-based feature selection function that raises the precision of feature selection. KNN (K-Nearest Neighbor) is an algorithm widely used in text classification; to address the heavy computational cost and imprecise class-labeling function of classical KNN, this paper proposes a weighted KNN algorithm based on training-set pruning. Pruning the training set improves the computational efficiency of the classifier, and a fuzzy-set membership function improves its accuracy. Experimental results and analysis on public datasets demonstrate the effectiveness of the algorithm.
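A sketch of distance-weighted KNN voting. The inverse-distance weight here is a simple stand-in for the paper's fuzzy-set membership function, training-set pruning is omitted, and the `(vector, label)` data layout is an assumption for the sketch.

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k=3):
    """Classify `query` by the k nearest training examples, each
    voting with weight 1 / (distance + eps) so that closer
    neighbours count for more. `train` is a list of (vector, label)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbours = sorted(train, key=lambda t: dist(t[0], query))[:k]
    votes = defaultdict(float)
    for vec, label in neighbours:
        votes[label] += 1.0 / (dist(vec, query) + 1e-9)
    return max(votes, key=votes.get)
```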

13.
Noisy data in the training set degrade the accuracy of text classification. This paper proposes a fast algorithm for correcting such noisy data. Earlier algorithms correct only one document per iteration, so the number of iterations is comparable to the amount of noisy data and the algorithms run inefficiently. By analyzing how reassigning a document's class affects the evaluation metric, this paper identifies noisy data by the change in modularity, allowing multiple documents to be corrected in a single iteration and thereby improving efficiency. Experimental results show that the proposed algorithm corrects noise in coarsely classified data faster, reducing the complexity from O(Tnm²) of previous algorithms to O(Tnm). The algorithm can be applied to large datasets and is of greater practical value.

14.
Vocabulary incompatibilities arise when the terms used to index a document collection are largely unknown, or at least not well-known to the users who eventually search the collection. No matter how comprehensive or well-structured the indexing vocabulary, it is of little use if it is not used effectively in query formulation. This paper demonstrates that techniques for mapping user queries into the controlled indexing vocabulary have the potential to radically improve document retrieval performance. We also show how the use of controlled indexing vocabulary can be employed to achieve performance gains for collection selection. Finally, we demonstrate the potential benefit of combining these two techniques in an interactive retrieval environment. Given a user query, our evaluation approach simulates the human user's choice of terms for query augmentation given a list of controlled vocabulary terms suggested by a system. This strategy lets us evaluate interactive strategies without the need for human subjects.

15.
Automatic keyword indexing is a technique for identifying meaningful and representative fragments or terms in a text. It supports applications such as automatic summarization, automatic classification, automatic clustering, and machine translation. This paper improves a lexical-chain construction algorithm using a HowNet-based word semantic relatedness measure, and combines it with statistical information such as term frequency and term position to index keywords automatically. Experiments show that the method performs automatic keyword indexing effectively.
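The combination of statistics mentioned above, term frequency plus a bonus for early position, can be sketched as a scoring function. The HowNet-based semantic relatedness and the lexical-chain construction are omitted here, and `position_bonus` is an illustrative parameter, not one from the paper.

```python
from collections import Counter

def score_keywords(words, position_bonus=0.5):
    """Score each term by its frequency plus a bonus that decays with
    the relative position of its first occurrence (earlier = higher),
    mimicking the TF + position statistics folded into the method."""
    counts = Counter(words)
    first = {}
    for i, w in enumerate(words):
        first.setdefault(w, i)
    n = max(1, len(words))
    return {w: c + position_bonus * (1 - first[w] / n) for w, c in counts.items()}
```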

16.
In the field of scientometrics, impact indicators and ranking algorithms are frequently evaluated using unlabelled test data comprising relevant entities (e.g., papers, authors, or institutions) that are considered important. The rationale is that the higher some algorithm ranks these entities, the better its performance. To compute a performance score for an algorithm, an evaluation measure is required to translate the rank distribution of the relevant entities into a single-value performance score. Until recently, it was simply assumed that taking the average rank (of the relevant entities) is an appropriate evaluation measure when comparing ranking algorithms or fine-tuning algorithm parameters. With this paper we propose a framework for evaluating the evaluation measures themselves. Using this framework the following questions can now be answered: (1) which evaluation measure should be chosen for an experiment, and (2) given an evaluation measure and corresponding performance scores for the algorithms under investigation, how significant are the observed performance differences? Using two publication databases and four test data sets we demonstrate the functionality of the framework and analyse the stability and discriminative power of the most common information retrieval evaluation measures. We find that there is no clear winner and that the performance of the evaluation measures is highly dependent on the underlying data. Our results show that the average rank is indeed an adequate and stable measure. However, we also show that relatively large performance differences are required to confidently determine if one ranking algorithm is significantly superior to another. Lastly, we list alternative measures that also yield stable results and highlight measures that should not be used in this context.
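The average-rank measure the paper finds adequate and stable is straightforward to compute from a ranking and a set of relevant entities (1-based ranks; lower is better):

```python
def average_rank(ranking, relevant):
    """Average 1-based rank of the relevant entities in `ranking`:
    the evaluation measure discussed above. Lower values indicate
    that relevant entities are ranked higher."""
    ranks = [i + 1 for i, entity in enumerate(ranking) if entity in relevant]
    return sum(ranks) / len(ranks)
```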

17.
Searching online information resources using mobile devices is affected by small screens which can display only a fraction of ranked search results. In this paper we investigate whether the search effort can be reduced by means of a simple user feedback: for a screenful of search results the user is encouraged to indicate a single most relevant document. In our approach we exploit the fact that, for small display sizes and limited user actions, we can construct a user decision tree representing all possible outcomes of the user interaction with the system. Examining the trees we can compute an upper limit on relevance feedback performance. In this study we consider three standard feedback algorithms: Rocchio, Robertson/Sparck-Jones (RSJ) and a Bayesian algorithm. We evaluate them in conjunction with two strategies for presenting search results: a document ranking that attempts to maximize information gain from the user’s choices and the top-D ranked documents. Experimental results indicate that for RSJ feedback, which involves an explicit feature selection policy, the greedy top-D display is more appropriate. For the other two algorithms, the exploratory display that maximizes information gain produces better results. We conducted a user study to compare the performance of the relevance feedback methods with real users and compare the results with the findings from the tree analysis. This comparison between the simulations and real user behaviour indicates that the Bayesian algorithm, coupled with the sampled display, is the most effective. Extended version of “Evaluating Relevance Feedback Algorithms for Searching on Small Displays,” Vishwa Vinay, Ingemar J. Cox, Natasa Milic-Frayling, Ken Wood, published in the proceedings of ECIR 2005, David E. Losada, Juan M. Fernández-Luna (Eds.), Springer 2005, ISBN 3-540-25295-9.
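Of the three feedback algorithms named above, Rocchio is the simplest to sketch: move the query vector toward the centroid of the relevant documents and away from the non-relevant ones. This is the standard textbook form with conventional default weights; the display strategies and decision-tree analysis from the paper are not modelled.

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """One Rocchio update on sparse term vectors (dict term -> weight):
    new_q = alpha*q + beta*mean(relevant) - gamma*mean(nonrelevant).
    Terms whose weight drops to zero or below are discarded."""
    def add(acc, vec, w):
        for t, v in vec.items():
            acc[t] = acc.get(t, 0.0) + w * v
    new = {}
    add(new, query, alpha)
    for d in relevant:
        add(new, d, beta / len(relevant))
    for d in nonrelevant:
        add(new, d, -gamma / len(nonrelevant))
    return {t: v for t, v in new.items() if v > 0}
```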

18.
杨涔, 邵波. 《图书情报工作》, 2021, 65(12): 65-72.
[Purpose/Significance] From the perspective of user experience, this study constructs an evaluation system integrating indicators of service quality, service capability, and service satisfaction, in order to guide mobile library systems in improving user services. [Method/Process] Drawing on the existing literature and related online reviews, and after discussion with experts, 16 indicators in four categories were selected to build an evaluation indicator system for mobile library systems based on user experience; the analytic hierarchy process was then used to determine the weights of these indicators. [Result/Conclusion] Mobile library systems should increase the supply of electronic databases and multimedia resources to meet users' demand for digital information resources; they should further improve functions such as online downloading and online reading to help users make effective use of collection resources in their work and study; and they should continuously optimize system performance and the human-computer interface to simplify operation logic and improve the user experience.

19.
Exploring criteria for successful query expansion in the genomic domain
Query expansion is commonly used in information retrieval to overcome vocabulary mismatch issues, such as synonymy between the original query terms and a relevant document. In general, query expansion experiments exhibit mixed results. Overall TREC Genomics Track results are also mixed; however, results from the top performing systems provide strong evidence supporting the need for expansion. In this paper, we examine the conditions necessary for optimal query expansion performance with respect to two system design issues: IR framework and knowledge source used for expansion. We present a query expansion framework that improves Okapi baseline passage MAP performance by 185%. Using this framework, we compare and contrast the effectiveness of a variety of biomedical knowledge sources used by TREC 2006 Genomics Track participants for expansion. Based on the outcome of these experiments, we discuss the success factors required for effective query expansion with respect to various sources of term expansion, such as corpus-based co-occurrence statistics, pseudo-relevance feedback methods, and domain-specific and domain-independent ontologies and databases. Our results show that choice of document ranking algorithm is the most important factor affecting retrieval performance on this dataset. In addition, when an appropriate ranking algorithm is used, we find that query expansion with domain-specific knowledge sources provides an equally substantive gain in performance over a baseline system.

20.
This paper proposes a context-history-based method for mining mobile users' preferences for personalized information services in the mobile Internet environment, and builds CAMTRS, a prototype mobile tourism information recommender system. Experimental results show that the method captures users' preferences well in the mobile Internet environment and helps improve the prediction performance of personalized recommender systems.
