Similar Documents
20 similar documents retrieved (search time: 171 ms)
1.
Under routing-conflict protocols it is difficult to realize embedded scheduling of semantic retrieval tasks: in routing-conflict protocol design and network protocol identification, semantic retrieval codes placed on the link load lead to low network communication efficiency. To improve the scheduling of semantic retrieval tasks and avoid routing conflicts, a Linux embedded task scheduling algorithm based on semantic similarity fusion is proposed. A semantic similarity feature model is constructed, making embedded scheduling of semantic retrieval tasks and routing-information shunting easy to realize. Each group of semantic similarity features is fused to obtain the vector length of the Linux embedded shunting matrix; eigendecomposition then yields the sample covariance, which improves the algorithm. Simulation results show that the algorithm has high throughput and recall, high execution efficiency, and superior retrieval precision, effectively improving the running efficiency of embedded semantic retrieval task scheduling. It has good application prospects in semantic system construction and retrieval optimization design.

2.
A novel feature-fusion-based grayscale image retrieval algorithm is proposed. The algorithm quantizes an image at a fixed step and maps it to an n-order frequency matrix, then fuses the information of the matrix's first and second singular vectors into a complex feature vector; cosine similarity is used as the similarity measure for retrieval. Experimental analysis shows that the algorithm outperforms the traditional color histogram method in retrieval performance.
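A minimal sketch of the pipeline described above, with assumptions: the quantization step, the use of a horizontal-adjacency frequency matrix, and packing the two singular vectors as the real and imaginary parts of one complex vector are all illustrative readings of the abstract, not details confirmed by the paper.

```python
import numpy as np

def complex_feature(img, levels=16):
    """Quantize a grayscale image into `levels` bins, build a frequency
    matrix of adjacent quantized pairs, and fuse the matrix's first two
    singular vectors into one complex feature vector (hypothetical
    reading of the paper's fusion step)."""
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    F = np.zeros((levels, levels))
    for a, b in zip(q.ravel()[:-1], q.ravel()[1:]):
        F[a, b] += 1                      # counts of adjacent value pairs
    U, s, Vt = np.linalg.svd(F)
    # first singular vector as real part, second as imaginary part
    return U[:, 0] + 1j * U[:, 1]

def cosine_sim(x, y):
    """Cosine similarity magnitude between two complex feature vectors."""
    return np.abs(np.vdot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y))
```

Retrieval then ranks database images by `cosine_sim` against the query's feature vector.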

3.
刘川 《科技风》2012,(1):95+97
GPS-based attitude determination for dynamic carriers is of great significance. Its key step is fast resolution of the integer ambiguities, but the direct-convergence method for resolving double-difference integer ambiguities takes too long to initialize to be practical. The LAMBDA algorithm can effectively estimate the integer ambiguities from the float solution and its variance-covariance matrix, but under dynamic conditions the variance-covariance matrix of the float solution cannot be obtained directly. This paper proposes a method for estimating the variance-covariance matrix so that the LAMBDA algorithm can be applied effectively to dynamic attitude measurement. A practical example shows that the method roughly halves the initialization time and can be used efficiently for real-time dynamic attitude determination.

4.
To address the heavy computation, data distortion, and susceptibility to local optima of existing intuitionistic fuzzy set clustering methods, an intuitionistic fuzzy spectral clustering algorithm based on a new intuitionistic fuzzy similarity measure is proposed. A new intuitionistic fuzzy similarity measure is first defined and used to construct the intuitionistic fuzzy similarity matrix; the unnormalized Laplacian matrix is computed from this similarity matrix, the eigenvector matrix is built on that basis, and k-means is then applied to the eigenvector matrix for clustering. Application to numerical examples demonstrates the feasibility and effectiveness of the proposed algorithm.
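The similarity matrix → unnormalized Laplacian → eigenvectors → k-means pipeline is standard spectral clustering, and can be sketched as below. The paper's intuitionistic fuzzy similarity measure is not reproduced; any precomputed similarity matrix `S` stands in for it, and the plain k-means loop is a generic stand-in as well.

```python
import numpy as np

def spectral_cluster(S, c, iters=50):
    """Unnormalized spectral clustering of a similarity matrix S into c
    clusters: Laplacian, c smallest eigenvectors, then k-means on rows."""
    L = np.diag(S.sum(axis=1)) - S            # unnormalized Laplacian
    vals, vecs = np.linalg.eigh(L)            # eigenvalues ascending
    X = vecs[:, :c]                           # spectral embedding (rows)
    # simple deterministic k-means on the embedding
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(c):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels
```

For a similarity matrix with clear block structure, the rows of the embedding collapse to one point per block, so k-means recovers the blocks.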

5.
匡彪 《科技广场》2014,(8):88-92
For the DOA estimation problem with acoustic vector-sensor arrays, this paper proposes a new DOA estimation algorithm based on the characteristics of the acoustic vector array and the idea of the MVDR algorithm. The algorithm sums the data covariance matrices of the velocity channels of the acoustic vector array to obtain a new covariance matrix, combines it with the data covariance matrix of the pressure channel, and estimates target DOAs through an angle-scanning process similar to the V-MVDR algorithm. The algorithm requires neither prior knowledge of the number of sources nor eigendecomposition, and has good DOA estimation and resolution performance. Computer simulations verify the effectiveness of the proposed algorithm.
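The angle-scanning step can be illustrated with the classical MVDR spatial spectrum P(θ) = 1 / (aᴴ R⁻¹ a). This is a generic uniform-linear-array sketch, not the paper's vector-sensor formulation: the combined pressure/velocity covariance matrix described above would simply replace `R`, and the diagonal-loading constant is an assumption for numerical stability.

```python
import numpy as np

def mvdr_spectrum(R, array_pos, wavelength, angles):
    """MVDR spatial spectrum over candidate angles for a linear array
    with element positions `array_pos` (same length unit as wavelength)."""
    Rinv = np.linalg.inv(R + 1e-6 * np.eye(len(R)))   # diagonal loading
    P = []
    for th in angles:
        a = np.exp(-2j * np.pi * array_pos * np.sin(th) / wavelength)
        P.append(1.0 / np.real(a.conj() @ Rinv @ a))  # 1 / (a^H R^-1 a)
    return np.array(P)
```

Peaks of the returned spectrum indicate source directions, with no eigendecomposition and no need to know the number of sources.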

6.
康凤  蒋小惠  冯梅 《科技通报》2014,(4):113-115
Based on the principle of eigendecomposition, a multidimensional covariance-matrix data mining algorithm is proposed, with iterative optimization of feature detection performance and a design study of a subspace text-data feature detection algorithm. A feature compressor based on the K-L transform is used to compress high-dimensional feature vectors, improving accuracy and reducing computation. In the subspace, the text data space is decomposed into two vector spaces, and their orthogonality is used for denoising, artifact removal, and feature detection and extraction. Simulation experiments on detecting highly camouflaged stealthy text intrusions, using the DARPA dataset as experimental data, show that the new algorithm effectively detects the two peaks of the signal, with clear detection results and high detection performance, providing good intrusion-text feature mining capability.
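The K-L transform compression step is the classical PCA projection onto the leading eigenvectors of the sample covariance matrix; a textbook sketch follows. The target dimension `k` and the return of the projection basis are illustrative choices, not details from the paper.

```python
import numpy as np

def kl_compress(X, k):
    """K-L transform (PCA) feature compression: project centered samples
    (rows of X) onto the k leading eigenvectors of the sample covariance."""
    mu = X.mean(axis=0)
    C = np.cov(X - mu, rowvar=False)     # sample covariance matrix
    vals, vecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]             # k leading eigenvectors
    return (X - mu) @ W, W, mu           # compressed features, basis, mean
```

New samples are compressed with the same `W` and `mu`, which keeps training and test features in the same reduced space.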

7.
An improved element-space beamforming algorithm for detecting network entanglement intrusion signals is proposed by improving conventional beam-space beamforming. The rotation vector of conventional beam-space processing is transformed into the element space to improve the performance of the element-space adaptive algorithm. Exploiting the property that the eigenvalues of the entanglement intrusion signal are larger than those of the noise, a high-order power of the inverse spatial covariance matrix is used to approximate the signal subspace, and the obtained weight vector is projected onto the improved element-space signal subspace to detect network entanglement intrusion signals. Simulation experiments show that the proposed improved element-space beamforming detection algorithm has good adaptive detection performance, with clear improvements in computational load and detection robustness, and good engineering value for network intrusion detection.

8.
An improved parallel spectral clustering algorithm is proposed. The algorithm improves the distance and similarity matrices and incorporates a kd-tree to sparsify large-scale data. For feature computation, the data are stored in Hadoop as a Laplacian matrix and the eigenvectors are obtained by distributed Lanczos computation. Finally, the efficient k-means algorithm is applied to the transposed eigenvector matrix to obtain the clustering result. Simulation results show that the proposed parallel spectral clustering algorithm brings a large performance improvement to large-scale data mining.
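The kd-tree sparsification step can be sketched on a single machine as below: only each point's k nearest neighbors (found with a kd-tree) keep a nonzero similarity. The Gaussian kernel and its bandwidth are assumptions; the Hadoop storage and distributed Lanczos stages are not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def sparse_similarity(X, k=10, sigma=1.0):
    """k-nearest-neighbor sparsified Gaussian similarity matrix,
    with neighbors found via a kd-tree."""
    tree = cKDTree(X)
    dist, idx = tree.query(X, k=k + 1)        # +1: each point finds itself
    n = len(X)
    S = np.zeros((n, n))
    for i in range(n):
        for d, j in zip(dist[i][1:], idx[i][1:]):   # skip the point itself
            S[i, j] = np.exp(-d ** 2 / (2 * sigma ** 2))
    return np.maximum(S, S.T)                 # symmetrize
```

The resulting matrix has O(nk) nonzeros instead of O(n²), which is what makes the subsequent Laplacian/eigenvector stage tractable at scale.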

9.
黄莉  李湘东 《情报杂志》2012,31(7):177-181,176
KNN is one of the most basic and widely used algorithms in automatic text classification, and it requires computing similarity between texts. Taking the Jensen-Shannon divergence as an example, after deriving and explaining its basic principle, it is used to compute text similarity; for comparison, the conventional cosine method is also used, and KNN classification is then performed to examine how different similarity measures affect the results of KNN-based automatic text classification. Empirical studies on multiple test collections show that, compared with the cosine method, classification based on Jensen-Shannon divergence yields higher accuracy but takes more time.
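The two similarity measures compared above can be sketched as follows, with texts represented as term-frequency vectors. The smoothing constant is an assumption to avoid log-of-zero; in KNN, a divergence is turned into a similarity, e.g. by negating it.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two term-frequency vectors
    (normalized to distributions internally)."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cosine(p, q):
    """Conventional cosine similarity between two term vectors."""
    return p @ q / (np.linalg.norm(p) * np.linalg.norm(q))
```

JS divergence is symmetric and bounded by ln 2 (for distributions with disjoint support it reaches exactly ln 2), so ranking neighbors by ascending JS divergence plays the same role as ranking by descending cosine similarity.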

10.
A Quadratic Programming Method for Portfolio Investment Decisions under Non-negativity Constraints
张京 《预测》1998,17(4):44-46
This paper considers the portfolio investment problem of achieving an expected rate of return under non-negativity constraints when the covariance matrix is positive semidefinite (but possibly not positive definite), proposes a quadratic programming algorithm, and applies the algorithm to a three-asset investment problem.
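The optimization problem described above can be stated and solved generically as below. This uses SciPy's SLSQP solver as a stand-in for the paper's own quadratic programming algorithm; the function name and the inequality form of the return constraint are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def min_risk_portfolio(cov, mu, target):
    """Minimize portfolio variance w' cov w subject to w >= 0,
    sum(w) = 1 and expected return mu'w >= target.
    cov may be positive semidefinite (not necessarily definite)."""
    n = len(mu)
    cons = [{'type': 'eq',   'fun': lambda w: w.sum() - 1},
            {'type': 'ineq', 'fun': lambda w: mu @ w - target}]
    res = minimize(lambda w: w @ cov @ w,
                   np.full(n, 1.0 / n),            # start at equal weights
                   bounds=[(0, None)] * n,         # non-negativity
                   constraints=cons, method='SLSQP')
    return res.x
```

A three-asset example, as in the paper, just means `cov` is 3×3 and `mu` has three entries.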

11.
In target tracking algorithms, both the target feature descriptor and the template update strategy are critical to tracking success. Based on the region covariance matrix descriptor combined with a particle filter, and using two common template update methods, this paper tracks pedestrians in surveillance video sequences of a shopping mall corridor in simulation experiments. By analyzing and comparing tracking performance under the different template update methods, an appropriate update method can be chosen for different situations to obtain more accurate tracking results.
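A common formulation of the region covariance descriptor can be sketched as below: the covariance of per-pixel feature vectors over an image patch. The feature set [x, y, I, |Ix|, |Iy|] is a frequent choice in the literature but an assumption here; the paper's exact feature set may differ.

```python
import numpy as np

def region_covariance(patch):
    """Region covariance descriptor of a grayscale patch: the 5x5
    covariance of per-pixel feature vectors [x, y, I, |Ix|, |Iy|]."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]                  # pixel coordinates
    Iy, Ix = np.gradient(patch.astype(float))    # intensity derivatives
    F = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                  np.abs(Ix).ravel(), np.abs(Iy).ravel()])
    return np.cov(F)                             # descriptor is 5x5, SPD
```

In a particle filter tracker, each particle's candidate region gets such a descriptor, which is compared against the template's descriptor (typically with a metric on covariance matrices).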

12.
Multi-feature fusion has achieved gratifying performance in image retrieval. However, some existing fusion mechanisms can make the result worse than expected due to the domain and visual diversity of images. A key problem in applying feature fusion is therefore how to measure and improve the complementarity of multi-level heterogeneous features. To this end, this paper proposes an adaptive multi-feature fusion method via cross-entropy normalization for effective image retrieval. First, various low-level features (e.g., SIFT) and high-level semantic features based on deep learning are extracted. Under each level of feature representation, the initial similarity scores of the query image w.r.t. the target dataset are calculated. Second, we use an independent reference dataset to approximate the tail of the attained initial similarity score ranking curve by cross-entropy normalization. The area under the ranking curve is then calculated as an indicator of the merit of the corresponding feature (a smaller area indicates a more suitable feature). Finally, fusion weights for each feature are assigned adaptively from the statistically elaborated areas. Extensive experiments on three public benchmark datasets demonstrate that the proposed method achieves superior performance compared with existing methods, improving mAP by a relative 1.04% (Holidays) and 1.22% (Oxf5k), and the N-S score by 0.04 (UKbench).
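The area-under-the-ranking-curve weighting idea can be sketched as below. This is a simplified reading: scores are only min-max normalized here, while the paper additionally fits the curve's tail on a reference dataset via cross-entropy normalization; that step, and the inverse-area weighting rule, are assumptions of this sketch.

```python
import numpy as np

def fusion_weights(score_lists):
    """Assign a fusion weight per feature from the area under its sorted
    similarity-score curve: a smaller area means a sharper ranking,
    hence a larger weight."""
    areas = []
    for s in score_lists:
        s = np.sort(np.asarray(s, float))[::-1]           # descending curve
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # normalize to [0,1]
        areas.append(np.trapz(s) / (len(s) - 1))          # area under curve
    inv = 1.0 / (np.array(areas) + 1e-12)
    return inv / inv.sum()                                # weights sum to 1
```

A feature whose score curve drops steeply after the top matches (discriminative) receives a larger weight than one whose scores decay slowly (ambiguous).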

13.
For spatial-domain steganography, the complementarity among three important steganalysis feature sets (wavelet characteristic function statistical moments, higher-order statistics, and the subtractive pixel adjacency matrix) is analyzed, and the features are fused using a method based on mutual-information criteria and boosting feature selection. Analysis and experiments show that the three feature sets are complementary and that fusion achieves better accuracy.

14.
Previous federated recommender systems are based on traditional matrix factorization, which can improve personalized service but is vulnerable to gradient inference attacks. Most adopt model averaging to fit the data heterogeneity of federated recommender systems, which requires more training cost. To address both privacy and efficiency, we propose an efficient federated item similarity model for heterogeneous recommendation, called FedIS, which trains a global item-based collaborative filtering model to eliminate user feature dependencies. Specifically, we extend the neural item similarity model to the federated setting, where each client locally optimizes only the shared item feature matrix. We then propose a fast-convergent federated aggregation method inspired by meta-learning to handle heterogeneous user updates and accelerate the convergence of global training. Furthermore, we propose a two-stage perturbation method that protects both local training and transmission while reducing communication costs. Finally, extensive experiments on four real-world datasets validate that FedIS provides competitive performance for federated recommendation. The proposed method also shows significant training efficiency with little performance degradation.

15.
On-shelf book segmentation and recognition are crucial steps in library inventory management and daily operation. In this paper, a detailed investigation of related work is conducted. RFID- and barcode-based solutions suffer from expensive hardware facilities and long-term maintenance, while digital image processing and OCR techniques are flawed by a lack of accuracy and robustness. On this basis, we propose a visual, non-character system that uses deep learning methods to accomplish on-shelf book segmentation and recognition. First, book spine masks are extracted from images of on-shelf books by an instance segmentation model, followed by affine transformation to rectangular images. Second, a spine feature encoder is trained to learn deep visual features of spine images. Finally, the book inventory search space is constructed, and the similarity between spine visual representations is computed to recognize the target book's identity. To train the models, we collect high-resolution datasets at the 10k scale and develop data annotation software accordingly. For validation, we design simulated scenarios of recognizing 3.6k IDs from 5.6k book spines and achieve a best top-1 accuracy of 99.18% and top-5 accuracy of 99.91%. Furthermore, we develop a prototype of a mobile library management robot with embedded edge intelligence, which automatically performs the on-shelf book image capture, spine segmentation and recognition, and target book grasping workflow.

16.
This paper proposes a new method for semi-supervised clustering of data that contains only pairwise relational information. Specifically, our method simultaneously learns two similarity matrices, in feature space and in label space: the feature-space similarity matrix is learned with an adaptive-neighbor strategy, while the label-space matrix is obtained through a label propagation approach. These two learned matrices capture the local structure (from feature space) and the global structure (from label space) of the data, respectively. Furthermore, most existing clustering methods do not fully consider the graph structure and therefore cannot achieve optimal clustering performance; our method forces the data into c clusters by adding a low-rank restriction on the graph Laplacian matrix. Finally, an alignment restriction between the two similarity matrices is imposed, all terms are combined into a unified framework, and an iterative optimization strategy is used to solve the proposed model. Experiments on real data show that our method achieves excellent performance compared with other state-of-the-art methods.

17.
Word sense disambiguation (WSD) is meant to assign the most appropriate sense to a polysemous word according to its context. We present a method for automatic WSD using only two resources: a raw text corpus and a machine-readable dictionary (MRD). The system learns the similarity matrix between word pairs from the unlabeled corpus, and it uses vector representations of sense definitions from the MRD, which are derived based on the similarity matrix. In order to disambiguate all occurrences of polysemous words in a sentence, the system separately constructs an acyclic weighted digraph (AWD) for every occurrence of a polysemous word in the sentence. The AWD is structured by considering the senses of the context words that occur with the target word in the sentence. After building the AWD for each polysemous word, we search for the optimal path through the AWD using the Viterbi algorithm and assign to the target word the sense on that optimal path. In experiments, our system achieves 76.4% accuracy on semantically ambiguous Korean words.
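The Viterbi search over a layered acyclic digraph of candidate senses can be sketched as below. The layer-per-word structure and the pairwise edge-scoring function are simplifying assumptions: `sense_lists` and `edge_weight` are illustrative names, and the paper's AWD construction from context-word senses is not reproduced.

```python
def viterbi_wsd(sense_lists, edge_weight):
    """Pick one sense per word by finding the maximum-weight path through
    a layered acyclic digraph: layer t holds the candidate senses of word
    t, and edge_weight(s1, s2) scores adjacent sense pairs."""
    # score[t][s]: best path weight ending at sense s of word t
    score = [{s: 0.0 for s in sense_lists[0]}]
    back = []
    for t in range(1, len(sense_lists)):
        score.append({})
        back.append({})
        for s in sense_lists[t]:
            best = max(score[t - 1],
                       key=lambda p: score[t - 1][p] + edge_weight(p, s))
            score[t][s] = score[t - 1][best] + edge_weight(best, s)
            back[t - 1][s] = best
    # backtrack from the best final sense
    path = [max(score[-1], key=score[-1].get)]
    for t in range(len(sense_lists) - 2, -1, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

Because each layer only connects to the next, the dynamic program runs in time linear in sentence length times the square of the number of candidate senses per word.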

19.
Media sharing applications, such as Flickr and Panoramio, contain a large number of pictures related to real-life events. For this reason, the development of effective methods to retrieve these pictures is important, but still a challenging task. Recognizing this importance, and to improve the retrieval effectiveness of tag-based event retrieval systems, we propose a new method to extract a set of geographical tag features from the raw geo-spatial profiles of user tags. The main idea is to use these features to select the best expansion terms in a machine learning-based query expansion approach. Specifically, we apply rigorous statistical exploratory analysis of spatial point patterns to extract the geo-spatial features. We use the features both to summarize the spatial characteristics of the spatial distribution of a single term, and to determine the similarity between the spatial profiles of two terms (term-to-term spatial similarity). To further improve our approach, we investigate the effect of combining our geo-spatial features with temporal features when choosing the expansion terms. To evaluate our method, we perform several experiments, including well-known feature analyses. These analyses show how much our proposed geo-spatial features contribute to improving the overall retrieval performance. The results from our experiments demonstrate the effectiveness and viability of our method.

20.
Relation extraction aims at finding meaningful relationships between two named entities within unstructured textual content. In this paper, we define the problem of information extraction as a matrix completion problem, employing the notion of universal schemas formed as a collection of patterns derived from open information extraction systems, together with additional features derived from grammatical clause patterns and statistical topic models. One challenge with earlier work employing matrix completion methods is that such approaches require a sufficient number of observed relation instances to make predictions; in practice there is often an insufficient number of explicit pieces of evidence supporting each relation type that could be used within the matrix model, so existing work suffers from low recall. We extend the state of the art by proposing novel ways of integrating two sets of features, topic models and grammatical clause structures, to alleviate the low-recall problem. More specifically, we propose to (1) employ grammatical clause information from textual sentences as an implicit indication of relation type and argument similarity, on the basis that similar relation types and arguments are likely to be observed within similar grammatical structures, and (2) use statistical topic models to determine similarity between relation types and arguments based on their co-occurrence within the same topics. We have performed extensive experiments on both gold standard and silver standard datasets. The experiments show that our approach addresses the low-recall problem in existing methods, with an improvement of 21% in recall and 8% in F-measure over the state-of-the-art baseline.
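The matrix-completion machinery the abstract builds on can be sketched with a generic low-rank alternating-least-squares solver, as below. This is not the paper's full universal-schema model: the rank, regularization, and iteration counts are illustrative assumptions, and the topic-model and clause-structure features are not included.

```python
import numpy as np

def als_complete(M, mask, rank=2, lam=0.01, iters=50, seed=0):
    """Low-rank matrix completion by alternating least squares:
    approximate M ~ U V' using only the entries where mask is True,
    then predict the unobserved entries from the learned factors."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U = rng.standard_normal((n, rank))
    V = rng.standard_normal((m, rank))
    I = lam * np.eye(rank)                       # ridge regularizer
    for _ in range(iters):
        for i in range(n):                       # update each row of U
            j = mask[i]
            U[i] = np.linalg.solve(V[j].T @ V[j] + I, V[j].T @ M[i, j])
        for k in range(m):                       # update each row of V
            i = mask[:, k]
            V[k] = np.linalg.solve(U[i].T @ U[i] + I, U[i].T @ M[i, k])
    return U @ V.T                               # completed matrix
```

In the relation-extraction setting, rows would be entity pairs, columns would be relation patterns, and the completed entries score unobserved (pair, relation) combinations.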


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号