Similar Documents
20 similar documents found.
1.
Low-use tangible print collections represent a long-standing problem for academic libraries. Expanding on previous research aimed at leveraging machine learning (ML) to predict patterns of collection use, this study explores the potential of adaptive boosting (AdaBoost) as a foundation for actionable predictive models of print title use. The study deploys the AdaBoost algorithm, with random forests as the base classifier, via the adabag package for R. Methodological considerations associated with dataset congruence, as well as sample-based modeling versus novel-data modeling, are explored in relation to four AdaBoost models that are trained and tested. The results show AdaBoost to be a promising ML solution for predictive modeling of print collections, with the central model of interest able to accurately predict use in over 85% of cases. The research also explores peripheral questions concerning the general evaluation of ML models and the compatibility of similar models trained with e-book versus print book usage data.
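The study itself uses the adabag package for R; as an illustrative analogue only, the sketch below wires up the same idea (boosting over random-forest base learners) in Python with scikit-learn. All data and feature names are invented stand-ins, not the study's dataset.

```python
# Hedged sketch: AdaBoost with random-forest base learners, analogous to
# the setup described above. X stands in for title-level metadata and y
# for a binary "used / not used" label; both are synthetic.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))            # stand-in for title metadata features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for observed use

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = AdaBoostClassifier(
    estimator=RandomForestClassifier(n_estimators=25, max_depth=3),  # `base_estimator` before scikit-learn 1.2
    n_estimators=50,
)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```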

2.
Genetic Approach to Query Space Exploration (cited 2 times: 0 self-citations, 2 by others)
This paper describes a genetic algorithm approach to intelligent information retrieval. The goal is to find an optimal set of documents that best matches the user's needs by exploring and exploiting the document space. More precisely, we define a genetic algorithm for information retrieval based on knowledge-based operators and guided by a heuristic that addresses the multimodal nature of relevance. Experiments with TREC-6 French data and queries show the effectiveness of our approach.
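As a generic illustration of the approach (not the paper's knowledge-based operators or relevance heuristic), here is a minimal genetic algorithm over query term-weight vectors, with a toy fitness function standing in for retrieval effectiveness:

```python
# Invented toy GA: evolve query term-weight vectors toward a hidden
# "optimal" query; in a real system, fitness would come from relevance
# judgments, not a known target.
import numpy as np

rng = np.random.default_rng(1)
VOCAB, POP, GENS = 20, 30, 40

def fitness(q, target):              # stand-in for retrieval effectiveness
    return -np.linalg.norm(q - target)

def crossover(a, b):                 # uniform crossover of two query vectors
    mask = rng.random(VOCAB) < 0.5
    return np.where(mask, a, b)

def mutate(q, rate=0.1):             # random perturbation of term weights
    q = q.copy()
    idx = rng.random(VOCAB) < rate
    q[idx] += rng.normal(scale=0.3, size=idx.sum())
    return np.clip(q, 0, 1)

target = rng.random(VOCAB)           # hidden optimum, unknown in practice
pop = rng.random((POP, VOCAB))
for _ in range(GENS):
    scores = np.array([fitness(q, target) for q in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]        # truncation selection
    children = [mutate(crossover(*parents[rng.integers(len(parents), size=2)]))
                for _ in range(POP - len(parents))]
    pop = np.vstack([parents, children])
print("best fitness:", max(fitness(q, target) for q in pop))
```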

3.
Demand-driven acquisition (DDA) programs are playing an increasingly important role in academic libraries. However, the literature surrounding this topic illustrates the wide-ranging, and frequently unpredictable, results of DDA implementation. As uncertainty abounds, librarians continue to seek out deeper understandings of those processes driving the use and purchase of DDA materials. Implicit in this search is a desire to understand how local environmental factors and user preferences dictate broader collection use and purchasing patterns. A small number of these studies have sought deeper insights through predictive modeling, though success has been limited. Following this line of inquiry, this study explores how machine learning might enable more effective collection development and management strategies through the predictive modeling of complex collection use and purchasing patterns. This research describes a replicable implementation of an adaptive boosting (AdaBoost) model that predicts the likelihood of DDA titles being triggered for purchase. The predictive capacity of this model is compared against a more traditional logistic regression model. This study's results show that the AdaBoost model possesses much higher predictive capacity than a regression-based model informed by the same set of predictors. The AdaBoost algorithm, once trained with local DDA data, provides accurate predictions in 82% of cases.
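A minimal sketch of the comparison described above, using synthetic stand-in data rather than the study's DDA records:

```python
# AdaBoost versus logistic regression on the same synthetic predictors,
# scored by cross-validated accuracy. Predictor names from the study are
# not reproduced.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
for name, model in [("adaboost", AdaBoostClassifier(n_estimators=100)),
                    ("logistic", LogisticRegression(max_iter=1000))]:
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```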

4.
Mapping, a crucial problem to be solved for optimal utilization of multicomputers, is the problem of allocating the tasks of a parallel program to the processors of a multicomputer in a way that minimizes the total completion time. Unfortunately, this problem is known to be NP-hard, and hence we cannot solve it optimally in a reasonable amount of computation time. In this paper, we present a new heuristic algorithm for solving the mapping problem. Our heuristic move-exchange (HME) algorithm efficiently iterates a sequence of task moves followed by a task exchange, and tries to minimize the completion time by minimizing a cost function. This function is a weighted sum of the cost of load imbalance and the cost of all communications. We then present two competing algorithms, viz. the Simulated Annealing (SA) algorithm and the Recursive Mincut (RM) bipartitioning algorithm, and compare the performance of the HME algorithm with that of the SA and RM algorithms. The results of our study clearly demonstrate that our HME algorithm is better than both the SA and RM algorithms for solving the mapping problem.
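The following toy sketch shows the shape of such a cost function (weighted load imbalance plus cross-processor communication) together with a single greedy move pass; it illustrates the objective only, not the HME algorithm itself, and all weights and sizes are invented:

```python
# Invented instance: T tasks, P processors, symmetric communication volumes.
import numpy as np

rng = np.random.default_rng(2)
T, P = 12, 3
work = rng.integers(1, 10, size=T)               # per-task computation cost
comm = np.triu(rng.integers(0, 4, size=(T, T)), 1)
comm = comm + comm.T                             # symmetric inter-task traffic
W_LOAD, W_COMM = 1.0, 0.5                        # invented weights

def cost(assign):
    loads = np.bincount(assign, weights=work, minlength=P)
    imbalance = loads.max() - loads.mean()
    cut = comm[np.not_equal.outer(assign, assign)].sum() / 2  # cross-processor traffic
    return W_LOAD * imbalance + W_COMM * cut

assign = rng.integers(P, size=T)
print("initial cost:", cost(assign))
for t in range(T):                               # one "move" pass over all tasks
    assign[t] = min(range(P), key=lambda p: cost(np.where(np.arange(T) == t, p, assign)))
print("after moves:", cost(assign))
```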

5.
Operational multimodal information retrieval systems have to deal with increasingly complex document collections and queries that are composed of a large set of textual and non-textual modalities such as ratings, prices, timestamps, geographical coordinates, etc. The resulting combinatorial explosion of modality combinations makes it intractable to treat each modality individually and to obtain suitable training data. As a consequence, instead of finding and training new models for each individual modality or combination of modalities, it is crucial to establish unified models and fuse their outputs in a robust way. Since the most popular weighting schemes for textual retrieval have in the past generalized well to many retrieval tasks, we demonstrate how they can be adapted for use with non-textual modalities, which is a first step towards finding such a unified model. We demonstrate that the popular weighting scheme BM25 is suitable for multimodal IR systems and analyze the underlying assumptions of the BM25 formula with respect to merging modalities under the so-called raw-score merging hypothesis, which requires no training. We establish a multimodal baseline for two multimodal test collections, show how modalities differ with respect to their contribution to relevance, and discuss the difficulty of treating modalities with overlapping information. Our experiments demonstrate that our multimodal baseline, with no training, achieves significantly higher retrieval effectiveness than using just the textual modality for the Social Book Search 2016 collection, and lies in the range of a trained multimodal approach using the optimal linear combination of the modality scores.
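For reference, a minimal implementation of the standard BM25 formula the abstract builds on, over a toy collection; the paper's extension to non-textual modalities and raw-score merging is not reproduced here:

```python
# Plain BM25 over a tiny tokenized collection; parameters k1 and b are the
# usual defaults.
import math
from collections import Counter

docs = [["cheap", "fantasy", "book"],
        ["fantasy", "series", "fantasy"],
        ["history", "book"]]
k1, b = 1.2, 0.75
N = len(docs)
avgdl = sum(len(d) for d in docs) / N
df = Counter(term for d in docs for term in set(d))   # document frequencies

def bm25(query, doc):
    tf = Counter(doc)
    score = 0.0
    for term in query:
        if term not in tf:
            continue
        idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
        score += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
    return score

for i, d in enumerate(docs):
    print(i, round(bm25(["fantasy", "book"], d), 3))
```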

6.
Word embeddings and convolutional neural networks (CNNs) have attracted extensive attention in various classification tasks for Twitter, e.g. sentiment classification. However, the effect of the configuration used to generate the word embeddings on classification performance has not been studied in the existing literature. In this paper, using a Twitter election classification task that aims to detect election-related tweets, we investigate the impact of the background dataset used to train the embedding models, as well as the parameters of the word embedding training process, namely the context window size, the dimensionality, and the number of negative samples, on the attained classification performance. By comparing the classification results of word embedding models trained on different background corpora (e.g. Wikipedia articles and Twitter microposts), we show that the background data should align with the Twitter classification dataset in both data type and time period to achieve significantly better performance than baselines such as SVM with TF-IDF. Moreover, by evaluating the results of word embedding models trained with various context window sizes and dimensionalities, we find that larger context windows and dimensionalities are preferable for improving performance. However, the number of negative samples does not significantly affect the performance of the CNN classifiers. Our experimental results also show that choosing the correct word embedding model for use with a CNN leads to statistically significant improvements over various baselines such as random, SVM with TF-IDF, and SVM with word embeddings. Finally, for out-of-vocabulary (OOV) words that are not available in the learned word embedding models, we show that a simple strategy of randomly initialising the OOV words without any prior knowledge is sufficient to attain good classification performance among current OOV strategies (e.g. random initialisation using statistics of the pre-trained word embedding models).
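The OOV strategy the abstract describes can be sketched in a few lines: words missing from a pre-trained embedding model receive random vectors. The `pretrained` lookup below is an invented stand-in for a real embedding model:

```python
# Build an embedding matrix for a vocabulary, randomly initialising OOV
# words with no prior knowledge (the simple strategy described above).
import numpy as np

rng = np.random.default_rng(3)
DIM = 50
pretrained = {"election": rng.normal(size=DIM),   # stand-in for trained vectors
              "vote": rng.normal(size=DIM)}

def embedding_matrix(vocab):
    mat = np.zeros((len(vocab), DIM))
    for i, word in enumerate(vocab):
        if word in pretrained:
            mat[i] = pretrained[word]
        else:                                     # OOV: random init, no prior knowledge
            mat[i] = rng.uniform(-0.25, 0.25, size=DIM)
    return mat

print(embedding_matrix(["election", "hustings", "vote"]).shape)  # (3, 50)
```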

7.
This paper presents a method for assessing the quality of similarity functions. The scenario taken into account is that of approximate data matching, in which it is necessary to determine whether two data instances represent the same real world object. Our method is based on the semi-automatic estimation of optimal threshold values. We propose two methods for performing such estimation. The first method is an algorithm based on a reward function, and the second is a statistical method. Experiments were carried out to validate the techniques proposed. The results show that both methods for threshold estimation produce similar results. The output of such methods was used to design a grading function for similarity functions. This grading function, called discernability, was used to compare a number of similarity functions applied to an experimental data set.
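As a generic illustration of threshold estimation (not the paper's reward function or its discernability measure), the sketch below sweeps candidate thresholds over labeled pairs and keeps the one with the best F1:

```python
# Synthetic similarity scores: true matches skew high, non-matches skew low.
import numpy as np

rng = np.random.default_rng(4)
sims = np.concatenate([rng.beta(5, 2, size=200),   # similarities of matching pairs
                       rng.beta(2, 5, size=200)])  # similarities of non-matching pairs
labels = np.concatenate([np.ones(200), np.zeros(200)])

def f1_at(threshold):
    pred = sims >= threshold
    tp = np.sum(pred & (labels == 1))
    precision = tp / max(pred.sum(), 1)
    recall = tp / (labels == 1).sum()
    return 2 * precision * recall / max(precision + recall, 1e-9)

best = max(np.linspace(0, 1, 101), key=f1_at)
print(f"estimated threshold = {best:.2f}, F1 = {f1_at(best):.3f}")
```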

8.
9.
In this paper, we consider the problem of finding an optimal schedule for several independent jobs in multiprocessor systems, i.e. for a given job list, in what order the jobs should be processed and how these jobs should be assigned to the processors of a multiprocessor system such that the completion time of the last completed job is minimized. We first discuss two models of this problem: one in which the time and processor requirements of a job are fixed, and another in which the time requirement of a job varies according to the number of processors allocated to it. We then present efficient algorithms, based on a branch and bound strategy, for solving these problem models, which are known to be NP-hard in the strong sense. Our algorithms use several new reduction rules to arrive at optimal solutions efficiently. Finally, we demonstrate the effectiveness of our algorithms with experimental results.
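A compact branch-and-bound sketch for the first model (fixed job times on m identical processors, minimizing makespan) appears below; it uses only a trivial bound and a symmetry rule, not the paper's reduction rules:

```python
# Jobs with fixed processing times on m identical processors; returns the
# minimal makespan by exhaustive branch and bound with simple pruning.
def makespan_bb(times, m):
    times = sorted(times, reverse=True)   # longest-first tightens pruning
    best = [sum(times)]                   # trivial upper bound: everything on one processor
    loads = [0] * m

    def branch(i):
        if i == len(times):
            best[0] = min(best[0], max(loads))
            return
        tried = set()
        for p in range(m):
            if loads[p] in tried:         # symmetry rule: equally loaded processors are equivalent
                continue
            tried.add(loads[p])
            loads[p] += times[i]
            if max(loads) < best[0]:      # bound: abandon branches that cannot improve
                branch(i + 1)
            loads[p] -= times[i]

    branch(0)
    return best[0]

print(makespan_bb([7, 5, 4, 4, 3, 3, 2], m=3))  # -> 10
```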

10.
[Purpose/Significance] To address the lack of provenance annotation in existing web-page texts, this paper proposes a method for discovering provenance relationships among similar web texts with the help of the PROV ontology. [Method/Process] Combining clustering algorithms, automatic semantic annotation, and linked-data construction with the PROV-POL provenance model, the method detects how web-text entities evolve and implements a two-level provenance scheme covering both the text level and the attribute level. [Result/Conclusion] Experiments verify the feasibility of tracing web-text provenance using Semantic Web technologies and a data provenance model, although the recall of the clustering algorithm used in the experiments still needs improvement.
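A toy sketch of recording such derivations with the standard PROV vocabulary via rdflib is shown below; the namespace is built from the official PROV URI, the resources are hypothetical, and the clustering and semantic-annotation stages of the method are not shown:

```python
# Minimal provenance graph: one revised web text derived from an earlier
# version, recorded at text level and at attribute (title) level.
from rdflib import Graph, Namespace

PROV = Namespace("http://www.w3.org/ns/prov#")   # standard PROV namespace URI
EX = Namespace("http://example.org/")            # hypothetical local namespace

g = Graph()
g.bind("prov", PROV)
g.add((EX["page-v2"], PROV.wasDerivedFrom, EX["page-v1"]))              # text level
g.add((EX["page-v2"], PROV.wasRevisionOf, EX["page-v1"]))
g.add((EX["page-v2-title"], PROV.wasDerivedFrom, EX["page-v1-title"]))  # attribute level
print(g.serialize(format="turtle"))
```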

11.
Prediction of the future performance of academic journals is a task that can benefit a variety of stakeholders, including editorial staff, publishers, indexing services, researchers, university administrators, and granting agencies. Using historical data on journal performance, this can be framed as a machine learning regression problem. In this work, we study two such regression tasks: (1) prediction of the number of citations a journal will receive during the next calendar year, and (2) prediction of the Elsevier CiteScore a journal will be assigned for the next calendar year. To address these tasks, we first create a dataset of historical bibliometric data for journals indexed in Scopus. We propose the use of neural network models trained on our dataset to predict the future performance of journals. To this end, we perform feature selection and model configuration for a multi-layer perceptron (MLP) and a long short-term memory (LSTM) network. Through experimental comparisons to heuristic prediction baselines and classical machine learning models, we demonstrate the superior performance of our proposed models in predicting future citation and CiteScore values.
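A minimal sketch of the regression setup with a multi-layer perceptron follows; the features and data are invented stand-ins for the Scopus-derived dataset, and the LSTM variant is not shown:

```python
# Synthetic stand-in features (e.g. recent citation counts, output volume);
# the target is next-year citations.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)
X = rng.poisson(100, size=(500, 5)).astype(float)
y = 1.05 * X[:, 0] + rng.normal(scale=5, size=500)   # toy "next-year citations"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
print("MAE:", round(mean_absolute_error(y_te, mlp.predict(X_te)), 2))
```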

12.
Greater collaboration generally produces higher category normalised citation impact (CNCI) and more influential science. Citation differences between domestic and international collaborative articles are known, but obscured in analyses of countries' CNCIs, compromising evaluation insights. Here, we address this problem by deconstructing and distinguishing domestic and international collaboration types to explore differences in article citation rates between collaboration type and countries. Using Web of Science article data covering 2009–2018, we find that individual country citation and CNCI profiles vary significantly between collaboration types (e.g., domestic single institution and international bilateral) and credit counting methods (full and fractional). The 'boosting' effect of international collaboration is greatest where total research capacity is smallest, which could mislead interpretation of performance for policy and management purposes. By incorporating collaboration type into the CNCI calculation, we define a new metric labelled Collab-CNCI. This can account for collaboration effects without presuming credit (as fractional counting does). We recommend that analysts should: (1) partition all article datasets so that citation counts can be normalised by collaboration type (Collab-CNCI) to enable improved interpretation for research policy and management; and (2) consider filtering out smaller entities from multinational and multi-institutional analyses where their inclusion is likely to obscure interpretation.
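The normalization idea behind such a metric can be sketched as follows: normalise each article's citations by the mean of its own (field, year, collaboration type) group rather than by (field, year) alone. The data and column names are invented, and this is not the authors' exact formula:

```python
# Invented articles with field, year, collaboration type, and citations.
import pandas as pd

articles = pd.DataFrame({
    "field":  ["phys", "phys", "phys", "phys", "chem", "chem"],
    "year":   [2015] * 6,
    "collab": ["domestic", "domestic", "international", "international",
               "domestic", "international"],
    "cites":  [4, 6, 12, 8, 3, 9],
})

keys = ["field", "year", "collab"]   # include collaboration type in the baseline
articles["collab_cnci"] = articles["cites"] / articles.groupby(keys)["cites"].transform("mean")
print(articles)
```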

13.
We present a non-greedy version of the recently published Principal Direction Divisive Partitioning (PDDP) algorithm. The PDDP algorithm creates a hierarchical taxonomy of a data set by successively splitting the data into sub-clusters. At each level, the cluster with the largest variance is split by a hyperplane orthogonal to its leading principal component. The PDDP algorithm is known to produce high-quality clusters, especially when applied to high-dimensional data, such as document-word feature matrices. It also scales well with both the size and the dimensionality of the data set. However, at each level only the locally optimal choice of splitting is considered, which at a later stage often leads to a non-optimal global partitioning of the data. The non-greedy version of the PDDP algorithm (NGPDDP) presented in this paper addresses this problem: at each level, multiple alternative splitting strategies are considered. Results from applying the algorithm to generated and real data (feature vectors from sets of text documents) are presented, showing substantial improvements in cluster quality.
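One PDDP split can be sketched in a few lines: project the highest-variance cluster onto its leading principal component and cut at the projection mean, i.e. by a hyperplane orthogonal to that component. The non-greedy multi-split search of NGPDDP is not shown:

```python
# One split of a two-blob toy dataset by the leading principal component.
import numpy as np

def pddp_split(X):
    centered = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # leading PC = vt[0]
    proj = centered @ vt[0]                                  # signed distance to hyperplane
    return X[proj <= 0], X[proj > 0]

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(-2, 1, (50, 10)), rng.normal(2, 1, (50, 10))])
left, right = pddp_split(X)
print(left.shape, right.shape)
```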

14.
An Improved Multi-Pattern Matching Algorithm for Chinese Strings (cited 4 times: 0 self-citations, 4 by others)
To address the problem of matching Chinese strings, this paper proposes an improved multi-pattern matching algorithm. The algorithm adopts a new combined-state automaton, which avoids the storage blow-up that can occur when building a complete character hash table for a large-character-set language. In addition, the algorithm exploits the properties of large-character-set languages such as Chinese by incorporating the idea of the QS (Quick Search) algorithm into multi-pattern matching, with good results. Experimental results show that the algorithm clearly outperforms the DFSA algorithm, taking on average only 70.33% of the DFSA algorithm's time.
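For reference, a single-pattern Quick Search (QS) implementation is sketched below, where the shift is driven by the character just past the current window; the paper's actual contribution (folding this shift into a compact multi-pattern automaton for a large character set) is not reproduced:

```python
# Quick Search (Sunday) single-pattern matching; works on Unicode text,
# so Chinese characters need no special handling.
def quick_search(text, pattern):
    m, n = len(pattern), len(text)
    # bad-character table: shift by distance from rightmost occurrence to m
    shift = {c: m - i for i, c in enumerate(pattern)}
    hits, pos = [], 0
    while pos + m <= n:
        if text[pos:pos + m] == pattern:
            hits.append(pos)
        if pos + m >= n:
            break
        pos += shift.get(text[pos + m], m + 1)   # char just past the window
    return hits

print(quick_search("信息检索与信息过滤", "信息"))  # -> [0, 5]
```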

15.
In this paper, we study different applications of cross-language latent topic models trained on comparable corpora. The first focus lies on the task of cross-language information retrieval (CLIR). The bilingual latent Dirichlet allocation (BiLDA) model allows us to create an interlingual, language-independent representation of both queries and documents. We construct several BiLDA-based document models for CLIR, where no additional translation resources are used. The second focus lies on methods for extracting translation candidates and semantically related words using only the per-topic word distributions of the cross-language latent topic model. As the main contribution, we combine the two former steps, blending the evidence from the per-document topic distributions and the per-topic word distributions of the topic model with the knowledge from the extracted lexicon. We design and evaluate a novel evidence-rich statistical model for CLIR, and show that such a model, which combines various (only internal) sources of evidence, obtains the best scores in experiments performed on the standard test collections of the CLEF 2001–2003 campaigns. We confirm these findings in an alternative evaluation, where we automatically generate queries and perform known-item search on a test subset of Wikipedia articles. The main importance of this work lies in the fact that we train translation resources from comparable document-aligned corpora and provide novel CLIR statistical models that exhaustively exploit as many cross-lingual clues as possible in the quest for better CLIR results, without the use of any additional external resources such as parallel corpora or machine-readable dictionaries.
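The interlingual scoring idea can be illustrated with a toy example: a source-language query scores a target-language document as a sum over shared topics of P(w | z_k) · P(z_k | d). The topic matrices below are invented; a real BiLDA model would supply them:

```python
# Invented toy numbers: phi_src rows give P(w | z) for two French query
# words over K=3 shared topics; theta_docs rows give P(z | d) for two
# target-language documents.
import numpy as np

vocab_src = {"cheval": 0, "course": 1}
phi_src = np.array([[0.60, 0.10, 0.05],
                    [0.20, 0.10, 0.05]])
theta_docs = np.array([[0.8, 0.1, 0.1],
                       [0.1, 0.2, 0.7]])

def clir_score(query, doc_theta):
    # sum over query words of sum_k P(w | z_k) * P(z_k | d)
    return sum(phi_src[vocab_src[w]] @ doc_theta for w in query if w in vocab_src)

for i, theta in enumerate(theta_docs):
    print(f"doc {i}: {clir_score(['cheval', 'course'], theta):.3f}")
```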

16.
In the field of scientometrics, impact indicators and ranking algorithms are frequently evaluated using unlabelled test data comprising relevant entities (e.g., papers, authors, or institutions) that are considered important. The rationale is that the higher some algorithm ranks these entities, the better its performance. To compute a performance score for an algorithm, an evaluation measure is required to translate the rank distribution of the relevant entities into a single-value performance score. Until recently, it was simply assumed that taking the average rank (of the relevant entities) is an appropriate evaluation measure when comparing ranking algorithms or fine-tuning algorithm parameters. With this paper we propose a framework for evaluating the evaluation measures themselves. Using this framework, the following questions can now be answered: (1) which evaluation measure should be chosen for an experiment, and (2) given an evaluation measure and corresponding performance scores for the algorithms under investigation, how significant are the observed performance differences? Using two publication databases and four test data sets, we demonstrate the functionality of the framework and analyse the stability and discriminative power of the most common information retrieval evaluation measures. We find that there is no clear winner and that the performance of the evaluation measures is highly dependent on the underlying data. Our results show that the average rank is indeed an adequate and stable measure. However, we also show that relatively large performance differences are required to confidently determine whether one ranking algorithm is significantly superior to another. Lastly, we list alternative measures that also yield stable results and highlight measures that should not be used in this context.
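A minimal sketch of the setup: given a ranking produced by some algorithm and a set of relevant entities, compute the average rank (the measure the paper finds adequate and stable) alongside the reciprocal rank of the first relevant entity for contrast. The entity IDs are invented:

```python
# Toy evaluation of one ranking over invented entity IDs.
ranking = ["p7", "p2", "p9", "p1", "p5", "p3", "p8"]   # algorithm's output order
relevant = {"p2", "p5", "p3"}                          # entities considered important

ranks = sorted(i + 1 for i, e in enumerate(ranking) if e in relevant)
avg_rank = sum(ranks) / len(ranks)
rr = 1 / ranks[0]                                      # reciprocal rank of first hit
print(f"ranks={ranks}, average rank={avg_rank:.2f}, RR={rr:.2f}")
```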

17.
Based on an analysis of users' web-browsing behaviour, this paper designs an expert knowledge base, built from users' domain expertise, to control and guide the behaviour of a web spider. Using a fuzzy rule inference algorithm, the topical relevance of the URLs within a page is predicted while the page is being downloaded, and the corresponding resources are simultaneously classified by fuzzy rules. The approach is implemented for the collection of basic-education resources. A comparison of two successive versions of the system shows that fuzzy-rule-based URL relevance prediction effectively speeds up topical resource collection, and that a second-pass classification further improves classification accuracy, raising the overall performance of the topical search system.
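A toy sketch of fuzzy-rule relevance scoring for a focused crawler follows; the membership shapes, rule set, and aggregation weights are all invented and merely illustrate the mechanism:

```python
# Invented fuzzy-rule scorer for URL relevance: two similarity inputs are
# fuzzified into "high" memberships and combined by two simple rules.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def url_relevance(anchor_sim, page_sim):
    high_anchor = tri(anchor_sim, 0.3, 1.0, 1.7)   # "anchor text highly on-topic"
    high_page = tri(page_sim, 0.3, 1.0, 1.7)       # "host page highly on-topic"
    r1 = min(high_anchor, high_page)               # rule 1: both high -> strongly relevant
    r2 = max(high_anchor, high_page)               # rule 2: either high -> somewhat relevant
    return 0.7 * r1 + 0.3 * r2                     # invented aggregation weights

print(f"predicted relevance: {url_relevance(0.8, 0.6):.3f}")
```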

18.
Collaborative filtering is concerned with making recommendations about items to users. Most formulations of the problem are specifically designed for predicting user ratings, assuming past data of explicit user ratings is available. However, in practice we may only have implicit evidence of user preference; and furthermore, a better view of the task is of generating a top-N list of items that the user is most likely to like. In this regard, we argue that collaborative filtering can be directly cast as a relevance ranking problem. We begin with the classic Probability Ranking Principle of information retrieval, proposing a probabilistic item ranking framework. In the framework, we derive two different ranking models, showing that despite their common origin, different factorizations reflect two distinctive ways to approach item ranking. For the model estimations, we limit our discussions to implicit user preference data, and adopt an approximation method introduced in the classic text retrieval model (i.e. the Okapi BM25 formula) to effectively decouple frequency counts and presence/absence counts in the preference data. Furthermore, we extend the basic formula by proposing the Bayesian inference to estimate the probability of relevance (and non-relevance), which largely alleviates the data sparsity problem. Apart from a theoretical contribution, our experiments on real data sets demonstrate that the proposed methods perform significantly better than other strong baselines.
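As a rough illustration of the flavour of this approach (not the paper's derivation), the sketch below applies a BM25-style weighting to an implicit user-item frequency matrix and ranks unseen items by weighted co-occurrence; all counts are invented:

```python
# Invented implicit-feedback counts (e.g. clicks) for 3 users x 4 items.
import numpy as np

freq = np.array([[5., 0., 2., 0.],
                 [0., 3., 0., 1.],
                 [4., 1., 0., 0.]])
k1, b = 1.2, 0.75
n_users = freq.shape[0]
item_df = (freq > 0).sum(axis=0)                       # users who touched each item
idf = np.log(1 + (n_users - item_df + 0.5) / (item_df + 0.5))
user_len = freq.sum(axis=1, keepdims=True)
weights = idf * freq * (k1 + 1) / (freq + k1 * (1 - b + b * user_len / user_len.mean()))

def top_n(user, n=2):
    scores = weights.T @ weights[user]                 # weighted item co-occurrence
    scores[freq[user] > 0] = -np.inf                   # exclude items already seen
    return np.argsort(scores)[::-1][:n]

print("top-N items for user 0:", top_n(0))
```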

19.
Document clustering of scientific texts using citation contexts (cited 3 times: 0 self-citations, 3 by others)
Document clustering has many important applications in the area of data mining and information retrieval. Many existing document clustering techniques use the “bag-of-words” model to represent the content of a document. However, this representation is only effective for grouping related documents when these documents share a large proportion of lexically equivalent terms. In other words, instances of synonymy between related documents are ignored, which can reduce the effectiveness of applications using a standard full-text document representation. To address this problem, we present a new approach for clustering scientific documents, based on the utilization of citation contexts. A citation context is essentially the text surrounding the reference markers used to refer to other scientific works. We hypothesize that citation contexts will provide relevant synonymous and related vocabulary which will help increase the effectiveness of the bag-of-words representation. In this paper, we investigate the power of these citation-specific word features, and compare them with the original document’s textual representation in a document clustering task on two collections of labeled scientific journal papers from two distinct domains: High Energy Physics and Genomics. We also compare these text-based clustering techniques with a link-based clustering algorithm which determines the similarity between documents based on the number of co-citations, that is, in-links from citing documents and out-links to cited documents. Our experimental results indicate that the use of citation contexts, when combined with the vocabulary in the full text of the document, is a promising alternative means of capturing critical topics covered by journal articles. More specifically, this document representation strategy, when used by the clustering algorithm investigated in this paper, outperforms both the full-text clustering approach and the link-based clustering technique on both scientific journal datasets.
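A minimal sketch of the document-representation comparison follows: cluster papers using full-text terms alone or combined with citation-context terms, then inspect the resulting labels. The texts are tiny invented stand-ins, and no cluster-quality metric is computed:

```python
# Invented miniature corpus: three papers' full texts and the text found
# around their citation markers ("citation contexts").
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

full_text = ["quark gluon plasma collision",
             "gene expression microarray analysis",
             "hadron collider energy spectrum"]
citation_ctx = ["heavy ion physics results",
                "dna sequencing methods",
                "particle physics measurement"]
combined = [f + " " + c for f, c in zip(full_text, citation_ctx)]

for name, corpus in [("full-text only", full_text), ("full-text + contexts", combined)]:
    X = TfidfVectorizer().fit_transform(corpus)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(f"{name}: cluster labels = {list(labels)}")
```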

20.
Most of the fastest-growing string collections today are repetitive, that is, most of the constituent documents are similar to many others. As these collections keep growing, a key approach to handling them is to exploit their repetitiveness, which can reduce their space usage by orders of magnitude. We study the problem of indexing repetitive string collections in order to perform efficient document retrieval operations on them. Document retrieval problems are routinely solved by search engines on large natural language collections, but the techniques are less developed on generic string collections. The case of repetitive string collections is even less understood, and there are very few existing solutions. We develop two novel ideas, interleaved LCPs and precomputed document lists, that yield highly compressed indexes solving the problem of document listing (find all the documents where a string appears), top-k document retrieval (find the k documents where a string appears most often), and document counting (count the number of documents where a string appears). We also show that a classical data structure supporting the latter query becomes highly compressible on repetitive data. Finally, we show how the tools we developed can be combined to solve ranked conjunctive and disjunctive multi-term queries under the simple tf-idf model of relevance. We thoroughly evaluate the resulting techniques in various real-life repetitiveness scenarios, and recommend the best choices for each case.
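As a baseline illustration only, the three operations the paper indexes can be answered naively by scanning a toy collection, which is exactly what its compressed structures are designed to avoid:

```python
# Naive scan answering document listing, top-k retrieval, and counting
# for one pattern.
docs = ["abracadabra", "banana banana", "cabana"]
pattern = "ana"

listing = [i for i, d in enumerate(docs) if pattern in d]     # document listing
counts = {i: docs[i].count(pattern) for i in listing}         # per-document frequency
top_k = sorted(counts, key=counts.get, reverse=True)[:2]      # top-k retrieval
print("listing:", listing, "| top-2:", top_k, "| count:", len(listing))
```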
