Similar Literature
 18 similar documents found (search time: 890 ms)
1.
李慧 《现代情报》2015,35(4):172-177
Word similarity computation is widely used in natural language processing tasks such as information retrieval, word sense disambiguation, and machine translation. Existing word similarity algorithms fall into two main categories: statistics-based and semantic-resource-based. The former computes similarity from the contextual information of words co-occurring in large-scale corpora, while the latter computes similarity using manually constructed semantic dictionaries or semantic networks. This paper compares and analyzes the two categories of word similarity algorithms, focusing on Web-corpus-based and Wikipedia-based algorithms, and summarizes the strengths and weaknesses of each. Finally, it argues that, driven by advances in information technology, Wikipedia-based and hybrid word similarity algorithms, as well as linked-data-driven similarity computation, are promising directions.
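To make the statistics-based family concrete, the sketch below builds co-occurrence vectors from a toy corpus and compares words by cosine similarity. The corpus, window size, and function names are illustrative assumptions, not taken from the cited paper.

```python
from collections import Counter
import math

def cooccurrence_vectors(sentences, window=2):
    """Map each word to a Counter of the words seen within `window` positions of it."""
    vectors = {}
    for tokens in sentences:
        for i, w in enumerate(tokens):
            ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            vectors.setdefault(w, Counter()).update(ctx)
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]
vecs = cooccurrence_vectors(corpus)
sim = cosine(vecs["cat"], vecs["dog"])
```

Words appearing in similar contexts ("cat" and "dog" here) receive a nonzero similarity even though they never co-occur as a pair.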

2.
This paper proposes an improved association-rule method for extracting non-taxonomic relations from text. First, a context-based term similarity method is used to obtain similarity weights between terms. An association-rule algorithm augmented with predicate verbs is then applied, combined with search-engine techniques, to obtain a set of candidate relation pairs. By comparing confidence and support values, the final non-taxonomic relations are extracted. Finally, experiments are carried out on test data and the results are analyzed.
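The support and confidence values this abstract relies on can be sketched as follows; the transaction sets and thresholds are illustrative assumptions, not the paper's data.

```python
def support(itemset, transactions):
    """Fraction of transactions (e.g., sentences as term sets) containing every item."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """conf(A -> B) = support(A and B) / support(A)."""
    sa = support(antecedent, transactions)
    return support(antecedent | consequent, transactions) / sa if sa else 0.0

# Each "transaction" is the set of terms occurring in one sentence.
transactions = [
    {"enzyme", "catalyzes", "reaction"},
    {"enzyme", "inhibits", "reaction"},
    {"protein", "binds", "receptor"},
    {"enzyme", "catalyzes", "substrate"},
]
sup = support({"enzyme", "reaction"}, transactions)
conf = confidence({"enzyme"}, {"catalyzes"}, transactions)
```

A candidate relation pair survives when both values clear chosen thresholds; the predicate-verb weighting the paper adds would adjust these raw counts.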

3.
To address the limitation of traditional retrieval models, which match keywords only at the syntactic level, this paper uses a domain ontology as the knowledge organization scheme and proposes a domain-ontology-based semantic retrieval model, together with the model's query semantic expansion algorithm and similarity computation algorithm.

4.
Many 3D model retrieval algorithms exist; content-based 3D model retrieval is currently the main research direction. A content-based method first computes feature descriptors for the query model and for every model in the database, then compares the query's descriptor against each database model's descriptor by similarity, returning the models most similar to the query and thereby realizing 3D model retrieval.
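The descriptor-comparison step described above can be sketched as a nearest-neighbor search over feature vectors; the three-element "shape descriptors" and model names below are hypothetical placeholders for whatever descriptor a real system computes.

```python
import math

def euclidean(a, b):
    """Distance between two feature descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_desc, database, k=2):
    """Rank database models by descriptor distance to the query (closest first)."""
    ranked = sorted(database.items(), key=lambda kv: euclidean(query_desc, kv[1]))
    return [name for name, _ in ranked[:k]]

# Hypothetical descriptors, e.g., a tiny feature histogram per model.
database = {
    "chair":  [0.9, 0.1, 0.3],
    "table":  [0.8, 0.2, 0.4],
    "sphere": [0.1, 0.9, 0.9],
}
top = retrieve([0.9, 0.1, 0.3], database)
```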

5.
The content of the web page a user is currently browsing helps characterize the user's immediate information need. Building on existing work, this paper proposes a context-based method for immediate Web information retrieval. The method lets the user select a passage of text from the page being browsed as the initial query; the retrieval system then extracts first-level and second-level expansion terms from the passage's context to form a new query, and presents the results to the user in descending order of similarity.

6.
Content-based image retrieval (CBIR) is one of the hot topics in image research. This paper introduces the basic principles of the algorithms underlying an image retrieval system, adopting an improved color-histogram algorithm combined with Euclidean distance for image processing and computation. The system is implemented with Visual C++ and the CxImage library. The user selects a query image and an image library; the system extracts features from the query image and from every image in the library, compares the corresponding features, computes the similarity between the query image and each library image, and finally outputs the retrieval results to the user ranked by similarity.
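The histogram-plus-Euclidean-distance pipeline can be sketched in a few lines; the bin count and the flat pixel lists are illustrative assumptions (a real system would histogram per color channel).

```python
import math

def histogram(pixels, bins=4, vmax=256):
    """Normalized intensity histogram used as the image's feature vector."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // vmax, bins - 1)] += 1
    n = len(pixels)
    return [c / n for c in h]

def distance(h1, h2):
    """Euclidean distance between two histograms; smaller means more similar."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

dark = histogram([10, 20, 30, 40])      # all pixels fall in the lowest bin
bright = histogram([220, 230, 240, 250])  # all pixels fall in the highest bin
```

Ranking the library by `distance` to the query histogram yields the similarity-ordered result list the abstract describes.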

7.
This paper proposes a novel feature-fusion-based grayscale image retrieval algorithm. The image is quantized at a fixed step size and mapped to an n-order frequency matrix; the first and second singular-value vectors of the matrix are then fused into a complex feature vector, and cosine similarity is used as the similarity measure for retrieval. Experimental analysis shows that the algorithm outperforms the traditional color-histogram method in retrieval performance.
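The fusion step can be sketched as follows: two real vectors (standing in for the first and second singular vectors, which the paper obtains by SVD of the frequency matrix) are combined into one complex vector, and similarity is the magnitude of the normalized Hermitian inner product. The fusion rule shown is one plausible reading of the abstract, not a confirmed reconstruction.

```python
def complex_feature(u, v):
    """Fuse two real vectors into one complex vector: u is the real part, v the imaginary."""
    return [a + 1j * b for a, b in zip(u, v)]

def complex_cosine(x, y):
    """|x . conj(y)| / (|x| |y|), a cosine-style similarity for complex vectors."""
    dot = sum(a * b.conjugate() for a, b in zip(x, y))
    nx = sum(abs(a) ** 2 for a in x) ** 0.5
    ny = sum(abs(b) ** 2 for b in y) ** 0.5
    return abs(dot) / (nx * ny) if nx and ny else 0.0

x = complex_feature([1, 0], [0, 1])
y = complex_feature([0, 1], [1, 0])
```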

8.
[Purpose/Significance] Automatic extraction of hierarchical concept relations enables fast, fine-grained semantic layering of concepts over large data sets, providing a reference for the subsequent fine-grained construction of domain ontologies. [Method/Process] First, a term set composed of compound terms and keywords is filtered by term frequency, document frequency, and semantic similarity to obtain a concept set for the academic-paper-evaluation domain. Second, concept co-occurrence relations and contextual semantic information are considered: the former is represented by a document-concept matrix and a concept co-occurrence matrix, the latter by word2vec word vectors; the two are integrated via cosine similarity to obtain a concept similarity matrix. Finally, taking the concept with the highest relatedness as the cluster center, spectral clustering is applied to the similarity matrix to obtain the concept hierarchy of the academic-paper-evaluation domain. [Result/Conclusion] Experiments show that the proposed model achieves high accuracy and that the constructed domain concept hierarchy is reasonable. [Innovation/Limitation] This paper proposes a model for automatic extraction of hierarchical concept relations based on word co-occurrence and word vectors; the method for determining cluster labels, however, is relatively simple and merits further study.
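The integration step, combining a co-occurrence-based similarity with an embedding-based similarity into one score per concept pair, can be sketched as a weighted sum of two cosines. The fusion weight `alpha` and the toy vectors are assumptions; the paper does not specify its exact combination rule here.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense real vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def fused_similarity(cooc_u, cooc_v, emb_u, emb_v, alpha=0.5):
    """Weighted fusion of co-occurrence-based and word2vec-based cosine similarity."""
    return alpha * cosine(cooc_u, cooc_v) + (1 - alpha) * cosine(emb_u, emb_v)

# Rows of a concept co-occurrence matrix, plus (hypothetical) word2vec vectors.
score = fused_similarity([1, 2, 3], [1, 2, 3], [1.0, 0.0], [0.0, 1.0])
```

The resulting matrix of fused scores is what spectral clustering would then operate on.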

9.
Under routing-conflict protocols it is difficult to schedule semantic retrieval tasks in embedded systems, and in routing-conflict protocol design and network protocol identification, semantic retrieval codes on loaded links lead to low network communication efficiency. To improve the schedulability of semantic retrieval tasks and avoid routing conflicts, this paper proposes a Linux embedded task-scheduling algorithm based on semantic similarity fusion. A semantic similarity feature model is built to facilitate embedded scheduling of semantic retrieval tasks and distribution of routing information. The semantic similarity features of each group are fused to obtain the vector length of the Linux embedded distribution matrix, which is then decomposed to obtain the sample covariance, thereby improving the algorithm. Simulation results show that the algorithm achieves high throughput, recall, execution efficiency, and retrieval precision, effectively improving the operating efficiency of embedded semantic retrieval task scheduling, with promising applications in semantic system construction and retrieval optimization.

10.
Multimedia content retrieval technology   Times cited: 2 (self-citations: 1, citations by others: 2)
谌群芳 《情报杂志》2003,22(6):54-55,58
Compared with traditional textual information, multimedia information is voluminous and hard to describe precisely. Content-based information retrieval is similarity-based retrieval; taking image retrieval as an example, this paper presents the retrieval methods and algorithms.

11.
A new method for fast similar-video retrieval is proposed. Image signatures and video units are derived from statistics of the video's spatiotemporal distribution, and video similarity is measured by counting matching video units. To support scalable computation, a retrieval method based on a clustered index table is proposed. Query tests on a large-scale database show that the similarity retrieval algorithm is fast and effective.

12.
Making full use of contextual relations between words during text retrieval helps improve retrieval performance. This paper first reviews existing related work; then, given that prior studies underuse word context relations, it proposes a text retrieval algorithm based on contextual relations between words; finally, the algorithm is validated experimentally.

13.
Unknown words such as proper nouns, abbreviations, and acronyms are a major obstacle in text processing. Abbreviations, in particular, are difficult to read and process because they are often domain specific. In this paper, we propose a method for automatic expansion of abbreviations using context and character information. In previous studies, dictionaries were used to search for abbreviation expansion candidates (candidate words for the original form of an abbreviation). We instead use a corpus from the same field that contains few abbreviations. We calculate the adequacy of each expansion candidate based on the similarity between the context of the target abbreviation and that of the candidate. The similarity is calculated with a vector space model in which each vector element consists of the words surrounding the target abbreviation and those surrounding its expansion candidate. Experiments on approximately 10,000 documents in the field of aviation showed that the accuracy of the proposed method is 10% higher than that of previously developed methods.
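The context-matching idea can be sketched as follows: collect a bag-of-words context vector for the abbreviation and for each expansion candidate in a field corpus, then pick the candidate with the highest cosine similarity. The one-sentence corpus, window size, and candidate words are illustrative assumptions, not the paper's aviation data.

```python
from collections import Counter
import math

def context_vector(text, target, window=3):
    """Bag of the words surrounding every occurrence of `target`."""
    tokens = text.split()
    vec = Counter()
    for i, w in enumerate(tokens):
        if w == target:
            vec.update(tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window])
    return vec

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def expand(corpus, abbrev_ctx, candidates):
    """Pick the candidate whose corpus context best matches the abbreviation's context."""
    return max(candidates, key=lambda c: cosine(abbrev_ctx, context_vector(corpus, c)))

corpus = "air traffic control tower guides each flight and the control tower monitors runway traffic"
abbrev_ctx = context_vector("the atc tower monitors runway traffic", "atc")
best = expand(corpus, abbrev_ctx, ["control", "flight"])
```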

14.
Research on web review mining based on improved feature extraction and clustering   Times cited: 1 (self-citations: 0, citations by others: 1)
[Purpose/Significance] This study addresses the low performance of feature extraction from Chinese online product reviews under information overload, and the problem of selecting initial centroids in feature clustering. [Method/Process] A weighted, improved Apriori algorithm generates a candidate set of product features, which is then filtered by independent support, a frequent-noun non-feature rule, and a PMI algorithm based on web search engines. Using HowNet-based semantic similarity and feature-opinion co-occurrence as measures of relatedness between product features, an improved K-means algorithm is proposed to cluster the features. [Result/Conclusion] Experiments show that in the feature extraction stage, precision is 69%, recall is 92.64%, and the F-measure reaches 79.07%. In the clustering stage, the proposed improved K-means algorithm outperforms the traditional algorithm in mining performance.
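The PMI filter mentioned above scores how strongly a candidate feature is associated with the product domain. A minimal sketch, assuming search-engine hit counts as frequency estimates (the hit numbers below are made up):

```python
import math

def pmi(hits_x, hits_y, hits_xy, total):
    """Pointwise mutual information estimated from hit counts:
    log2( p(x, y) / (p(x) p(y)) ) with probabilities hits / total."""
    if not (hits_x and hits_y and hits_xy):
        return float("-inf")
    return math.log2(hits_xy * total / (hits_x * hits_y))

# Hypothetical counts: "battery" appears on 100 pages, "phone" on 50,
# both together on 40, out of 1000 pages total.
score = pmi(100, 50, 40, 1000)
```

Candidates whose PMI with the product term falls below a threshold are filtered out as non-features.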

15.
This study proposes a novel extended co-citation search technique: graph-based document retrieval on a co-citation network containing citation context information. The proposed search expands the scope of the target documents by repetitively spreading the relationship of co-citation in order to obtain relevant documents that are not identified by traditional co-citation searches. Specifically, this search technique is a combination of (a) applying a graph-based algorithm to compute the similarity score on a complicated network, and (b) incorporating co-citation contexts into the process of calculating similarity scores to reduce the negative effects of an increasing number of irrelevant documents. To evaluate the search performance of the proposed search, 10 proposed methods (five representative graph-based algorithms applied to co-citation networks weighted with/without contexts) are compared with two kinds of baselines (a traditional co-citation search with/without contexts) in information retrieval experiments based on two test collections (biomedicine and computational linguistics articles). The experiment results showed that the normalized discounted cumulative gain (nDCG) scores of the proposed methods using co-citation contexts tended to be higher than those of the baselines. In addition, the combination of the random walk with restart (RWR) algorithm and the network weighted with contexts achieved the best search performance among the 10 proposed methods. Thus, it is clarified that the combination of graph-based algorithms and co-citation contexts is effective in improving the performance of co-citation search techniques, and that sole use of a graph-based algorithm is not enough to enhance search performance over the baselines.
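The best-performing component, random walk with restart on a weighted co-citation network, can be sketched with a simple power iteration. The four-node graph and its co-citation weights are invented for illustration; the paper's networks are far larger and weighted by citation contexts.

```python
def rwr(adj, seed, restart=0.15, iters=200):
    """Random walk with restart: p' = (1 - c) * W^T p + c * e_seed,
    where W row-normalizes the weighted adjacency dict `adj`."""
    p = {n: (1.0 if n == seed else 0.0) for n in adj}
    for _ in range(iters):
        nxt = {n: (restart if n == seed else 0.0) for n in adj}
        for u in adj:
            out = sum(adj[u].values())
            if out == 0:
                nxt[seed] += (1 - restart) * p[u]  # dangling node: restart
                continue
            for v, w in adj[u].items():
                nxt[v] += (1 - restart) * p[u] * w / out
        p = nxt
    return p

# Symmetric co-citation weights: A-B strong, A-C weak, C-D moderate.
graph = {
    "A": {"B": 3, "C": 1},
    "B": {"A": 3},
    "C": {"A": 1, "D": 2},
    "D": {"C": 2},
}
scores = rwr(graph, "A")
```

Documents two hops away (like "D") still get a score, which is how the method reaches relevant documents a plain co-citation lookup from "A" would miss.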

16.
Word sense disambiguation (WSD) is meant to assign the most appropriate sense to a polysemous word according to its context. We present a method for automatic WSD using only two resources: a raw text corpus and a machine-readable dictionary (MRD). The system learns the similarity matrix between word pairs from the unlabeled corpus, and it uses vector representations of sense definitions from the MRD, which are derived based on the similarity matrix. In order to disambiguate all occurrences of polysemous words in a sentence, the system separately constructs an acyclic weighted digraph (AWD) for every occurrence of a polysemous word in the sentence. The AWD is structured based on the senses of the context words that occur with the target word. After building the AWD for each polysemous word, we search the optimal path of the AWD using the Viterbi algorithm, and assign to the target word the sense on the optimal path. In our experiments, the system achieves 76.4% accuracy on semantically ambiguous Korean words.
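The Viterbi search over sense candidates can be sketched as a best-path computation through a layered graph, one layer of candidate senses per word; edge weights stand in for the learned sense-similarity scores. The two-word "bank/deposit" example and its scores are invented for illustration and are far simpler than the paper's AWD construction.

```python
def viterbi(layers, edge_score):
    """Best-scoring path through a layered graph: layers[i] holds the
    candidate senses of word i; edge_score(a, b) links adjacent layers."""
    prev = {s: 0.0 for s in layers[0]}
    backptrs = []
    for layer in layers[1:]:
        cur, bp = {}, {}
        for s in layer:
            best = max(prev, key=lambda t: prev[t] + edge_score(t, s))
            cur[s] = prev[best] + edge_score(best, s)
            bp[s] = best
        backptrs.append(bp)
        prev = cur
    last = max(prev, key=prev.get)
    path = [last]
    for bp in reversed(backptrs):  # follow back-pointers to recover the path
        path.append(bp[path[-1]])
    path.reverse()
    return path, prev[last]

# Hypothetical sense-similarity scores between adjacent words' senses.
sim = {
    ("bank/finance", "deposit/money"): 0.9,
    ("bank/finance", "deposit/sediment"): 0.1,
    ("bank/river", "deposit/money"): 0.1,
    ("bank/river", "deposit/sediment"): 0.8,
}
layers = [["bank/finance", "bank/river"], ["deposit/money", "deposit/sediment"]]
path, score = viterbi(layers, lambda a, b: sim[(a, b)])
```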

17.
Word sense disambiguation (WSD) is meant to assign the most appropriate sense to a polysemous word according to its context. We present a method for automatic WSD using only two resources: a raw text corpus and a machine-readable dictionary (MRD). The system learns the similarity matrix between word pairs from the unlabeled corpus, and it uses vector representations of sense definitions from the MRD, which are derived based on the similarity matrix. In order to disambiguate all occurrences of polysemous words in a sentence, the system separately constructs an acyclic weighted digraph (AWD) for every occurrence of a polysemous word in the sentence. The AWD is structured based on the senses of the context words that occur with the target word. After building the AWD for each polysemous word, we search the optimal path of the AWD using the Viterbi algorithm, and assign to the target word the sense on the optimal path. In our experiments, the system achieves 76.4% accuracy on semantically ambiguous Korean words.

18.
In this paper, we present a comparison of collocation-based similarity measures for the proper selection of additional search terms in query expansion: the Jaccard, Dice, and Cosine similarity measures. In addition, we consider two more similarity measures: average conditional probability (ACP) and normalized mutual information (NMI). ACP is the mean of the two conditional probabilities between a query term and an additional search term; NMI is a normalized value of the two terms' mutual information. All these similarity measures are functions of the two terms' frequencies and their collocation frequency, but differ in the method of measurement. The selected measure changes the order of additional search terms and their weights, and hence has a strong influence on retrieval performance. In our query-expansion experiments with these five similarity measures, the additional search terms selected by the Jaccard, Dice, and Cosine measures include more frequent terms with lower similarity values than those selected by ACP or NMI. In overall assessments of query expansion, the Jaccard, Dice, and Cosine similarity measures are better than ACP and NMI in terms of retrieval effectiveness, whereas NMI and ACP are better in terms of execution efficiency.
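The five measures, all functions of the two term frequencies `fx`, `fy` and the collocation frequency `fxy`, can be written out directly. The NMI normalization shown (pointwise MI divided by -log p(x, y)) is one common choice and is an assumption; the paper may normalize differently.

```python
import math

def jaccard(fx, fy, fxy):
    return fxy / (fx + fy - fxy)

def dice(fx, fy, fxy):
    return 2 * fxy / (fx + fy)

def cosine(fx, fy, fxy):
    return fxy / math.sqrt(fx * fy)

def acp(fx, fy, fxy):
    """Average conditional probability: mean of P(y|x) and P(x|y)."""
    return 0.5 * (fxy / fx + fxy / fy)

def nmi(fx, fy, fxy, n):
    """Pointwise MI normalized by -log p(x, y); n is the corpus size.
    This normalization is an assumption, not confirmed by the paper."""
    return math.log(n * fxy / (fx * fy)) / math.log(n / fxy)

# Toy counts: term x occurs 20 times, y 10 times, together 5 times, in 1000 units.
values = (jaccard(20, 10, 5), dice(20, 10, 5), cosine(20, 10, 5),
          acp(20, 10, 5), nmi(20, 10, 5, 1000))
```

Because the measures scale the same counts differently, they rank candidate expansion terms differently, which is exactly the effect the paper evaluates.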

