Similar Documents
 20 similar documents retrieved (search time: 46 ms)
1.
Building on an analysis of the crawling algorithms of traditional web crawlers, this work combines a tunneling algorithm with web page block segmentation to guide the crawling of a focused (topic-specific) crawler. Experiments searching for educational resources on the portal sites of four universities show that the new algorithm effectively improves search efficiency.
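A minimal sketch of the tunneling idea mentioned above, assuming hypothetical callables `fetch(url)`, `relevance(text)`, and `extract_links(text)` (none of these appear in the original paper): the crawler may pass through a bounded number of off-topic pages before abandoning a path.

```python
from collections import deque

RELEVANCE_THRESHOLD = 0.5   # assumed cutoff for "on-topic" pages
TUNNEL_DEPTH = 3            # max consecutive off-topic hops allowed

def focused_crawl(seeds, fetch, relevance, extract_links, max_pages=1000):
    """Breadth-first focused crawl with tunneling.

    fetch(url) -> page text, relevance(text) -> score in [0, 1],
    extract_links(text) -> list of URLs; all three are assumed inputs.
    """
    queue = deque((url, 0) for url in seeds)   # (url, off-topic streak)
    visited, collected = set(), []
    while queue and len(collected) < max_pages:
        url, streak = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        text = fetch(url)
        if relevance(text) >= RELEVANCE_THRESHOLD:
            collected.append(url)
            streak = 0            # back on topic: reset the tunnel counter
        else:
            streak += 1           # off topic: consume tunnel budget
            if streak > TUNNEL_DEPTH:
                continue          # abandon this path
        for link in extract_links(text):
            if link not in visited:
                queue.append((link, streak))
    return collected
```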

2.
乔建忠 《图书情报工作》2013,57(14):114-120
To address the limited generalization ability of a single classification algorithm in topic crawling when faced with multi-topic Web crawling and classification requirements, this paper designs a strategy in which multiple strong classification algorithms are combined into a classifier ensemble: the topic crawler evaluates and ranks the classifiers online according to the current topic task and selects the best-performing classifier for classification. Classification experiments under multiple topic-crawling tasks compare the accuracy of each individual algorithm with the average accuracy of the combined strategy, together with an overall analysis of evaluation metrics such as classification efficiency. The results show that the strategy alleviates domain locality and generalizes well.
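The online "evaluate, rank, pick the best" step can be approximated as in the sketch below; the choice of scikit-learn estimators and the cross-validation scoring are assumptions, not the authors' implementation.

```python
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pick_best_classifier(X_sample, y_sample):
    """Rank candidate classifiers on the current topic task and return the best.

    X_sample / y_sample: a small labeled sample gathered online for this task.
    """
    candidates = {
        "naive_bayes": MultinomialNB(),
        "linear_svm": LinearSVC(),
        "logistic_regression": LogisticRegression(max_iter=1000),
    }
    ranking = []
    for name, clf in candidates.items():
        score = cross_val_score(clf, X_sample, y_sample, cv=3).mean()
        ranking.append((score, name, clf))
    ranking.sort(reverse=True, key=lambda t: t[0])
    best_score, best_name, best_clf = ranking[0]
    best_clf.fit(X_sample, y_sample)   # train the winner on the full sample
    return best_name, best_clf
```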

3.
A Framework for Learning Non-Taxonomic Ontology Relations Based on a Topic Crawler   (total citations: 1; self-citations: 0; citations by others: 1)
乔建忠 《图书情报工作》2010,54(18):120-129
This paper proposes a framework and method for automatically learning non-taxonomic ontology relations from the relevant web pages returned by a topic crawler. Given the characteristics of ontology learning over the Internet, the main techniques used are term frequency, co-occurrence statistics, and the partitioning clustering algorithm K-Means; complex syntactic-structure analysis and semi-supervised clustering algorithms such as EM, BIRCH, and SOM are deliberately avoided, so the degree of automation and the efficiency are relatively high. The learned results are then used to guide the topic crawler in judging the relevance of web pages. The quality of the learned non-taxonomic relations is evaluated objectively by the crawler's performance in practical applications.
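A rough illustration of the statistical pipeline named above (term frequency, co-occurrence counting, K-Means), assuming tokenized sentences are already available; this is a sketch under those assumptions, not the paper's implementation.

```python
import numpy as np
from collections import Counter
from itertools import combinations
from sklearn.cluster import KMeans

def cooccurrence_clusters(sentences, top_n=200, n_clusters=10):
    """Cluster frequent terms by their sentence-level co-occurrence profiles.

    sentences: list of token lists extracted from crawled pages.
    """
    freq = Counter(tok for sent in sentences for tok in set(sent))
    vocab = [w for w, _ in freq.most_common(top_n)]
    index = {w: i for i, w in enumerate(vocab)}

    # Symmetric co-occurrence matrix over the top-N terms.
    co = np.zeros((len(vocab), len(vocab)))
    for sent in sentences:
        present = [index[t] for t in set(sent) if t in index]
        for i, j in combinations(present, 2):
            co[i, j] += 1
            co[j, i] += 1

    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(co)
    clusters = {}
    for word, label in zip(vocab, labels):
        clusters.setdefault(label, []).append(word)
    return clusters   # candidate groups of terms likely to be related
```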

4.
User queries to the Web tend to have more than one interpretation due to their ambiguity and other characteristics. How to diversify the ranking results to meet users’ various potential information needs has attracted considerable attention recently. This paper is aimed at mining the subtopics of a query either indirectly from the returned results of retrieval systems or directly from the query itself to diversify the search results. For the indirect subtopic mining approach, clustering the retrieval results and summarizing the content of clusters is investigated. In addition, labeling topic categories and concept tags on each returned document is explored. For the direct subtopic mining approach, several external resources, such as Wikipedia, Open Directory Project, search query logs, and the related search services of search engines, are consulted. Furthermore, we propose a diversified retrieval model to rank documents with respect to the mined subtopics for balancing relevance and diversity. Experiments are conducted on the ClueWeb09 dataset with the topics of the TREC09 and TREC10 Web Track diversity tasks. Experimental results show that the proposed subtopic-based diversification algorithm significantly outperforms the state-of-the-art models in the TREC09 and TREC10 Web Track diversity tasks. The best performance our proposed algorithm achieves is α-nDCG@5 0.307, IA-P@5 0.121, and α#-nDCG@5 0.214 on the TREC09, as well as α-nDCG@10 0.421, IA-P@10 0.201, and α#-nDCG@10 0.311 on the TREC10. The results conclude that the subtopic mining technique with the up-to-date users’ search query logs is the most effective way to generate the subtopics of a query, and the proposed subtopic-based diversification algorithm can select the documents covering various subtopics.
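For readers unfamiliar with the diversity metric reported above, the following is a compact sketch of α-nDCG@k. It normalizes against a greedy reordering of the same ranked list, which is a simplification of the official evaluation (the real ideal list is built from the full judgment pool, not only the retrieved documents).

```python
import math

def alpha_dcg(ranking, k, alpha=0.5):
    """ranking: list of sets, each set = subtopics covered by that document."""
    seen = {}                      # subtopic -> count of earlier relevant docs
    score = 0.0
    for r, subtopics in enumerate(ranking[:k], start=1):
        gain = sum((1 - alpha) ** seen.get(s, 0) for s in subtopics)
        score += gain / math.log2(r + 1)
        for s in subtopics:
            seen[s] = seen.get(s, 0) + 1
    return score

def alpha_ndcg(ranking, k, alpha=0.5):
    """Normalize by a greedy (near-ideal) reordering of the same documents."""
    pool = list(ranking)
    ideal, seen = [], {}
    while pool and len(ideal) < k:
        best = max(pool, key=lambda st: sum((1 - alpha) ** seen.get(s, 0) for s in st))
        pool.remove(best)
        ideal.append(best)
        for s in best:
            seen[s] = seen.get(s, 0) + 1
    idcg = alpha_dcg(ideal, k, alpha)
    return alpha_dcg(ranking, k, alpha) / idcg if idcg > 0 else 0.0
```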

5.
Topic crawling is the foundation of vertical (specialty) search engines, and crawling strategies and crawling algorithms are the core of topic crawling technology. By analyzing the basic principles of topic crawling, this paper classifies and compares crawling strategies and algorithms, surveys the research progress and current hot topics in the area, and provides a reference for further research on topic crawling technology.

6.
Evolution Analysis of Microblog Public Opinion on Emergencies Integrating Topic and Sentiment Features   (total citations: 1; self-citations: 0; citations by others: 1)
安璐  吴林 《图书情报工作》2017,61(15):120-129
[Purpose/Significance] Microblogs are an important medium for the spread of online public opinion about emergencies. Mining the topics and sentiments of emergency-related microblog posts has significant practical value for monitoring online public opinion and for identifying and predicting potential problems and risks. This paper proposes a method for analyzing the evolution of microblog public opinion on emergencies that integrates topic and sentiment features. [Method/Process] Taking the Zika event as an example, the life cycle of microblog public-opinion evolution is divided into stages; word2vec is used to extract the microblog topics of each stage of the event's life cycle; and a dictionary-based sentiment-analysis method, drawing on multiple emotion sources such as sentiment words and emoticons, classifies the sentiment of comments under different topics at a fine granularity and computes sentiment intensity, ultimately enabling joint analysis of microblog topics and sentiments. [Result/Conclusion] The proposed method can reveal the topic characteristics, sentiment types, and sentiment intensity of event-related microblogs at each stage of an emergency's life cycle, and can uncover how topic and sentiment features co-evolve in online public opinion.
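The dictionary-based intensity step can be illustrated as below; the tiny lexicons and emoticon table are invented placeholders (real Chinese microblog resources would be far larger), and the word2vec topic-extraction stage is not reproduced here.

```python
# Hypothetical, tiny sentiment resources for illustration only.
POSITIVE = {"good": 1.0, "great": 2.0, "happy": 1.5}
NEGATIVE = {"bad": -1.0, "terrible": -2.0, "afraid": -1.5}
EMOTICONS = {":)": 1.0, ":(": -1.0}

def sentiment_intensity(tokens):
    """Sum lexicon weights over words and emoticons to get a signed intensity."""
    lexicon = {**POSITIVE, **NEGATIVE, **EMOTICONS}
    score = sum(lexicon.get(tok, 0.0) for tok in tokens)
    if score > 0:
        label = "positive"
    elif score < 0:
        label = "negative"
    else:
        label = "neutral"
    return label, abs(score)

def topic_sentiment(comments_by_topic):
    """Aggregate per-topic sentiment labels and intensities for pre-grouped comments."""
    return {
        topic: [sentiment_intensity(tokens) for tokens in comments]
        for topic, comments in comments_by_topic.items()
    }
```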

7.
Design of a Deep Web Crawler Based on the ID3 Classification Algorithm   (total citations: 1; self-citations: 0; citations by others: 1)
To address the low information coverage in current Web information mining, this paper studies web crawler systems and proposes a deep-Web page collection method based on the ID3 classification algorithm. Web page features are analyzed, processed, and classified to extract the forms that lead to deep-web pages; by automatically submitting these forms, deeper and broader pages are retrieved. Experiments show that the method can effectively reduce the blind spots of existing search engines and improve search results.
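A sketch of what form classification for deep-web discovery might look like, using scikit-learn's decision tree with the entropy criterion as a stand-in for ID3 (the library implements CART, so this is only an approximation); the form features and training labels below are illustrative assumptions.

```python
from sklearn.tree import DecisionTreeClassifier

# Illustrative feature vectors per HTML form:
# [text inputs, select boxes, has submit button (0/1),
#  hidden fields, label text mentions "search" (0/1)]
X_train = [
    [2, 1, 1, 0, 1],   # searchable form leading to deep-web content
    [1, 0, 1, 0, 1],
    [1, 0, 1, 3, 0],   # login form
    [0, 0, 1, 5, 0],   # tracking / navigation form
]
y_train = [1, 1, 0, 0]  # 1 = worth auto-submitting to reach deep-web pages

# criterion="entropy" selects splits by information gain, as ID3 does.
clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
clf.fit(X_train, y_train)

candidate_form = [[3, 2, 1, 1, 1]]
print(clf.predict(candidate_form))   # -> [1]: submit this form
```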

8.
Measuring Search Engine Quality   (total citations: 12; self-citations: 3; citations by others: 9)
The effectiveness of twenty public search engines is evaluated using TREC-inspired methods and a set of 54 queries taken from real Web search logs. The World Wide Web is taken as the test collection and a combination of crawler and text retrieval system is evaluated. The engines are compared on a range of measures derivable from binary relevance judgments of the first seven live results returned. Statistical testing reveals a significant difference between engines and high intercorrelations between measures. Surprisingly, given the dynamic nature of the Web and the time elapsed, there is also a high correlation between results of this study and a previous study by Gordon and Pathak. For nearly all engines, there is a gradual decline in precision at increasing cutoff after some initial fluctuation. Performance of the engines as a group is found to be inferior to the group of participants in the TREC-8 Large Web task, although the best engines approach the median of those systems. Shortcomings of current Web search evaluation methodology are identified and recommendations are made for future improvements. In particular, the present study and its predecessors deal with queries which are assumed to derive from a need to find a selection of documents relevant to a topic. By contrast, real Web search reflects a range of other information need types which require different judging and different measures.
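Because the comparison above rests on binary relevance judgments of the first few live results, the basic precision-at-cutoff measure it reports can be reproduced with a helper like the one below (an assumed sketch, not the study's actual evaluation scripts).

```python
def precision_at_k(judgments, k):
    """judgments: list of 0/1 relevance values for results in rank order."""
    top = judgments[:k]
    return sum(top) / k if k > 0 else 0.0

# Example: judged first seven results of one engine for one query.
judged = [1, 1, 0, 1, 0, 0, 1]
for k in (1, 3, 5, 7):
    print(k, precision_at_k(judged, k))   # 1.0, 0.667, 0.6, 0.571
```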

9.
A structured document retrieval (SDR) system aims to minimize the effort users spend to locate relevant information by retrieving parts of documents. To evaluate the range of SDR tasks, from element to passage to tree retrieval, numerous task-specific measures have been proposed. This has resulted in SDR evaluation measures that cannot easily be compared with respect to each other and across tasks. In previous work, we defined the SDR task of tree retrieval where passage and element are special cases. In this paper, we look in greater detail into tree retrieval to identify the main components of SDR evaluation: relevance, navigation, and redundancy. Our goal is to evaluate SDR within a single probabilistic framework based on these components. This framework, called Extended Structural Relevance (ESR), calculates user expected gain in relevant information depending on whether it is seen via hits (relevant results retrieved), unseen via misses (relevant results not retrieved), or possibly seen via near-misses (relevant results accessed via navigation). We use these expectations as parameters to formulate evaluation measures for tree retrieval. We then demonstrate how existing task-specific measures, if viewed as tree retrieval, can be formulated, computed and compared using our framework. Finally, we experimentally validate ESR across a range of SDR tasks.

10.
[Purpose/Significance] This study analyzes zero-citation papers in environmental science from a topic perspective, comparing zero-citation and highly cited papers in terms of article content and external indicators in order to reveal why zero-citation papers exist. [Method/Process] First, PLDA topic identification is applied to the abstracts of 260 highly cited papers and 907 zero-citation papers in the domestic environmental science field drawn from the Web of Science database; topic-similarity computation is then used to discover associations between topics. Topic popularity serves as the internal indicator, while publication time and publication journal serve as external evaluation indicators. Finally, topic content and external indicators are combined to compare zero-citation and highly cited papers on both shared and distinct topics. [Result/Conclusion] When the research topics are the same, the journal's impact factor is the main factor behind zero citation; when the topics differ, the topic content of the research itself is the main cause of zero citation.

11.
This paper is concerned with Markov processes for computing page importance. Page importance is a key factor in Web search. Many algorithms such as PageRank and its variations have been proposed for computing the quantity in different scenarios, using different data sources, and with different assumptions. Then a question arises, as to whether these algorithms can be explained in a unified way, and whether there is a general guideline to design new algorithms for new scenarios. In order to answer these questions, we introduce a General Markov Framework in this paper. Under the framework, a Web Markov Skeleton Process is used to model the random walk conducted by the web surfer on a given graph. Page importance is then defined as the product of two factors: page reachability, the average possibility that the surfer arrives at the page, and page utility, the average value that the page gives to the surfer in a single visit. These two factors can be computed as the stationary probability distribution of the corresponding embedded Markov chain and the mean staying time on each page of the Web Markov Skeleton Process respectively. We show that this general framework can cover many existing algorithms including PageRank, TrustRank, and BrowseRank as its special cases. We also show that the framework can help us design new algorithms to handle more complex problems, by constructing graphs from new data sources, employing new family members of the Web Markov Skeleton Process, and using new methods to estimate these two factors. In particular, we demonstrate the use of the framework with the exploitation of a new process, named Mirror Semi-Markov Process. In the new process, the staying time on a page, as a random variable, is assumed to be dependent on both the current page and its inlink pages. Our experimental results on both the user browsing graph and the mobile web graph validate that the Mirror Semi-Markov Process is more effective than previous models in several tasks, even when there are web spams and when the assumption on preferential attachment does not hold.
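A small numerical sketch of the two-factor definition described above: page reachability is taken as the stationary distribution of the embedded Markov chain (computed here by power iteration) and page utility as the mean staying time. The transition matrix and staying times are toy values invented for illustration, not data from the paper.

```python
import numpy as np

# Toy embedded Markov chain over 3 pages (rows sum to 1).
P = np.array([
    [0.0, 0.7, 0.3],
    [0.4, 0.0, 0.6],
    [0.5, 0.5, 0.0],
])
mean_staying_time = np.array([12.0, 3.0, 8.0])   # e.g. seconds per visit

# Page reachability: stationary distribution pi satisfying pi = pi @ P.
pi = np.full(P.shape[0], 1.0 / P.shape[0])
for _ in range(1000):
    nxt = pi @ P
    if np.allclose(nxt, pi, atol=1e-12):
        break
    pi = nxt

# Page importance = reachability * utility, normalized for readability.
importance = pi * mean_staying_time
importance /= importance.sum()
print("reachability:", np.round(pi, 3))
print("importance:  ", np.round(importance, 3))
```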

12.
Research on cross-language information retrieval (CLIR) has typically been restricted to settings using binary relevance assessments. In this paper, we present evaluation results for dictionary-based CLIR using graded relevance assessments in a best match retrieval environment. A text database containing newspaper articles and a related set of 35 search topics were used in the tests. First, monolingual baseline queries were automatically formed from the topics. Secondly, source language topics (in English, German, and Swedish) were automatically translated into the target language (Finnish), using structured target queries. The effectiveness of the translated queries was compared to that of the monolingual queries. Thirdly, pseudo-relevance feedback was used to expand the original target queries. CLIR performance was evaluated using three relevance thresholds: stringent, regular, and liberal. When the regular or liberal threshold was used, reasonable performance was achieved. Using the stringent threshold, equally high performance could not be achieved. On all the relevance thresholds the performance of the translated queries was successfully raised by pseudo-relevance feedback based query expansion. However, the performance at the stringent threshold in relation to the other thresholds could not be raised by this method.

13.
Taking English synonymous terms as an example, this paper proposes three effective techniques for automatically acquiring terminological resources from the Web: self-learning of syntactic patterns, extraction from online synonym dictionaries, and crawling of static synonym-term classifications. On this basis, a prototype Web synonym retrieval system (Web Synonym Searcher) is designed and implemented. Experimental tests show that automatically acquiring synonymous terms from the Web is a very promising approach.

14.
The combination of evidence can increase retrieval effectiveness. In this paper, we investigate the effectiveness of a decision mechanism for the selective combination of evidence for Web Information Retrieval and particularly for topic distillation. We introduce two measures of a query’s broadness and use them to select an appropriate combination of evidence for each query. The results from our experiments show that there is a statistically significant association between the output of the decision mechanism and the relative effectiveness of the different combinations of evidence. Moreover, we show that the proposed methodology can be applied in an operational setting, where relevance information is not available, by setting the decision mechanism’s thresholds automatically.

15.
Research on Topic-Focused Web Information Collection Techniques   (total citations: 9; self-citations: 0; citations by others: 9)
李春旺 《图书情报工作》2005,49(4):77-80,70
This paper briefly introduces topic-focused information collection systems and studies their core techniques from five aspects: seed page generation, topic representation, relevance computation strategies, crawling strategies, and search termination strategies. It discusses in detail the manual, automatic, and hybrid approaches to seed page generation; keyword-based versus ontology-based topic representation; a comparison of several heuristic relevance-computation strategies; basic crawling strategies and tunneling techniques; and the various conditions for terminating a crawl. While analyzing the algorithms, characteristics, and applications of these techniques, it also proposes corresponding improvements tailored to the characteristics of topic-focused information collection.

16.
The Internet has emerged as one of the most prevalent forms of communication media in and among public organizations. The construction and management of World Wide Web (Web) sites are becoming essential elements of modern public administration. This study is intended to provide an in-depth evaluation of the Web sites of Taiwan’s central government based on the Web performance indicators provided by Nielsen (2000). Based on Nielsen’s indicators, the authors carefully studied and coded each individual Web site of Taiwan’s central governmental agencies. The coding results indicate that the government Web sites in general have made many of the mistakes predicted by Nielsen. Most of the agencies need to improve the coordination between Web designers and the line managers of the agencies. In light of these research findings, this article provides a number of strategies to improve the Web design practices of Taiwanese public organizations that may also apply to public organizations in general.

17.
This article examines the literature that discusses technological applications for the collections and services of GPO depository libraries. It identifies areas treated in the literature and topics for further writing and research. The purpose is to present a collection of basic readings and a framework for viewing a growing body of literature.

18.
Probabilistic topic models have recently attracted much attention because of their successful applications in many text mining tasks such as retrieval, summarization, categorization, and clustering. Although many existing studies have reported promising performance of these topic models, none of the work has systematically investigated the task performance of topic models; as a result, some critical questions that may affect the performance of all applications of topic models are mostly unanswered, particularly how to choose between competing models, how multiple local maxima affect task performance, and how to set parameters in topic models. In this paper, we address these questions by conducting a systematic investigation of two representative probabilistic topic models, probabilistic latent semantic analysis (PLSA) and Latent Dirichlet Allocation (LDA), using three representative text mining tasks, including document clustering, text categorization, and ad-hoc retrieval. The analysis of our experimental results provides deeper understanding of topic models and many useful insights about how to optimize the performance of topic models for these typical tasks. The task-based evaluation framework is generalizable to other topic models in the family of either PLSA or LDA.
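For readers who want a minimal, reproducible version of the LDA side of such a comparison, a sketch with scikit-learn follows; the toy corpus and parameter values are arbitrary illustrations, not those tuned in the paper.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "web search engines rank pages by relevance",
    "topic models cluster documents into latent themes",
    "crawlers fetch pages and follow hyperlinks on the web",
    "latent dirichlet allocation assigns topics to words",
]

# Bag-of-words counts: the standard input representation for PLSA/LDA.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)            # per-document topic mixtures

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {top}")
```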

19.
A Comparative Study of Interestingness Measures for Association Rules   (total citations: 2; self-citations: 1; citations by others: 1)
Association rule mining is an important research topic in data mining, and many efficient algorithms for it already exist. However, these algorithms find far too many association rules for users to analyze. To overcome this problem, a number of measures have been proposed for assessing how interesting a rule is. In this paper we compare and analyze several objective interestingness measures for association rules on worked examples, and we offer suggestions on how to use them.
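The most common objective measures compared in such studies are simple to compute; the sketch below shows support, confidence, and lift on a toy basket dataset (the data are invented purely for illustration).

```python
def rule_measures(transactions, antecedent, consequent):
    """Compute support, confidence, and lift for the rule antecedent -> consequent."""
    antecedent, consequent = set(antecedent), set(consequent)
    n = len(transactions)
    n_a = sum(antecedent <= t for t in transactions)            # contains antecedent
    n_c = sum(consequent <= t for t in transactions)            # contains consequent
    n_ac = sum((antecedent | consequent) <= t for t in transactions)

    support = n_ac / n
    confidence = n_ac / n_a if n_a else 0.0
    lift = confidence / (n_c / n) if n_c else 0.0
    return {"support": support, "confidence": confidence, "lift": lift}

baskets = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]
print(rule_measures(baskets, {"bread"}, {"milk"}))
# support 0.4, confidence ~0.67, lift ~0.83 (< 1: negatively correlated)
```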

20.
Design and Implementation of an RSS-Based Blog Collection System*   (total citations: 1; self-citations: 0; citations by others: 1)
This paper proposes an implementation scheme for an RSS-based blog collection system. Two crawlers are designed: one performs a breadth-first traversal of the Web to obtain the RSS address of each user; the other performs a vertical search on each RSS address, tracking whether newly updated blog posts have appeared and loading the updates into the database incrementally. A model system is also designed and implemented for the algorithm.
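A minimal sketch of the second crawler's job, incremental polling of known RSS feeds; it assumes the third-party `feedparser` library, a caller-supplied `store(entry)` function standing in for the database insert, and an in-memory set of already-stored post links.

```python
import time
import feedparser

def poll_feeds(feed_urls, store, seen, interval=600):
    """Repeatedly check each RSS feed and store only entries not seen before.

    store(entry) is assumed to insert a post into the database;
    seen is a set of already-stored entry links.
    """
    while True:
        for url in feed_urls:
            feed = feedparser.parse(url)
            for entry in feed.entries:
                link = entry.get("link")
                if link and link not in seen:
                    store(entry)          # incremental load: new posts only
                    seen.add(link)
        time.sleep(interval)              # wait before the next polling round
```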
