Similar literature
20 similar documents found (search time: 93 ms)
1.
In this paper, cyclic codes are studied over some finite quaternion integer rings with respect to the quaternion Mannheim distance.

2.
One of the main challenges of coding theory is to construct linear codes with the best possible parameters. Various algebraic and combinatorial methods, along with computer searches, are used to construct codes with better parameters. Given the computational complexity of determining the minimum distance of a code, exhaustive searches are not feasible for all but small parameter sets. Therefore, codes with certain algebraic structures are preferred for both theoretical and practical reasons. In this work we focus on the class of constacyclic codes, first generating all constacyclic codes exhaustively over small finite fields of order up to 9 to create a database of best constacyclic codes. We then use this database as a building block in a search algorithm for new quasi-twisted codes. Our search on constacyclic codes has revealed 16 new codes, i.e., codes with better parameters than the currently best-known linear codes. Given that constacyclic codes are well studied, this is a surprising result. Moreover, using the standard constructions of puncturing, shortening, or extending a given code, we derived 55 additional new codes from these constacyclic codes, achieving improvements on 71 entries in the database of best-known codes. Our search strategy is comprehensive: it computes every constacyclic code for a given length and shift constant, while avoiding redundant examination of constacyclic codes that are equivalent to cyclic codes or to other constacyclic codes.
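For reference, the standard textbook definition behind this search space (background, not this paper's own notation): a λ-constacyclic code of length n over F_q is a linear code closed under the constacyclic shift, or equivalently an ideal of a quotient ring:

```latex
\[
\sigma_{\lambda}(c_0, c_1, \dots, c_{n-1}) \;=\; (\lambda c_{n-1},\, c_0, \dots, c_{n-2}),
\qquad
C \text{ is } \lambda\text{-constacyclic} \;\iff\; C \text{ is an ideal of } \mathbb{F}_q[x]/\langle x^n - \lambda \rangle .
\]
```

Here λ = 1 gives cyclic codes and λ = −1 negacyclic codes; the "shift constant" mentioned in the abstract is this λ.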

3.
Communication systems and disk or tape memories can sometimes produce clusters of errors, namely burst errors. Many designs exist to detect and even correct such errors. Recently, a new class of codes called m-spotty byte error-correcting codes has found applications in high-speed memory systems that employ random access memory chips with wide Input/Output data, especially 8, 16, or 32 bits. The MacWilliams identity provides the relationship between the weight distribution of a code and that of its dual code. In addition, an interesting new metric, the Rosenbloom–Tsfasman metric, has recently been studied as an alternative metric for linear codes. In this paper, we combine these two topics and introduce the m-spotty Rosenbloom–Tsfasman weights and the m-spotty Rosenbloom–Tsfasman weight enumerator of a binary code. Moreover, we prove a MacWilliams identity for the m-spotty Rosenbloom–Tsfasman weight enumerators.
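For orientation, the classical MacWilliams identity that such results generalize, in its standard Hamming-weight form for a binary linear [n, k] code C with dual C⊥ (the paper's m-spotty Rosenbloom–Tsfasman version is not reproduced here):

```latex
\[
W_{C^{\perp}}(x, y) \;=\; \frac{1}{|C|}\; W_{C}(x + y,\; x - y),
\qquad
W_{C}(x, y) \;=\; \sum_{c \in C} x^{\,n - \operatorname{wt}(c)}\, y^{\,\operatorname{wt}(c)} .
\]
```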

4.
Low-density generator matrix (LDGM) codes have recently drawn researchers' attention thanks to their satisfactory performance at only moderate encoding/decoding complexity, as well as their applicability to network coding. In this paper, we propose a fast simulation method for investigating the performance of LDGM codes. Supported by a confidence-interval analysis, the presented method is, for example, 10⁸ times faster than Monte Carlo simulation for bit-error rates (BER) in the 10⁻¹⁰ region.

5.
n-grams have been used widely and successfully for approximate string matching in many areas. s-grams have been introduced more recently as an n-gram-based matching technique in which di-grams are formed of both adjacent and non-adjacent characters. s-grams have proved successful in approximate string matching across language boundaries in Information Retrieval (IR). However, s-grams lack a precise definition, as does their similarity comparison. In this paper, we give precise definitions for both. Our definitions are developed in a bottom-up manner, assuming only character strings and elementary mathematical concepts. Extending established practice, we provide novel definitions of s-gram profiles and the L1 distance metric for them. This is a stronger string proximity measure than the popular Jaccard similarity measure, because Jaccard is insensitive to the counts of each n-gram in the strings being compared. However, owing to the popularity of Jaccard in IR experiments, we define the reduction of s-gram profiles to binary profiles in order to precisely define the (extended) Jaccard similarity function for s-grams. We also show that n-gram similarity/distance computations are special cases of our generalized definitions.
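A small illustration of the count-sensitivity point, using plain character n-grams (classical adjacent n-grams, not the paper's s-grams; the function names are mine):

```python
from collections import Counter

def ngram_profile(s, n=2):
    """Multiset (count) profile of the character n-grams of s."""
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def l1_distance(p, q):
    """L1 distance between two count profiles."""
    return sum(abs(p[k] - q[k]) for k in set(p) | set(q))

def jaccard(p, q):
    """Jaccard similarity on the *binary* (set) reduction of the profiles."""
    a, b = set(p), set(q)
    return len(a & b) / len(a | b) if a | b else 1.0

p = ngram_profile("banana")   # 'an' and 'na' each occur twice
q = ngram_profile("bana")     # same di-gram *set*, different counts
print(jaccard(p, q))          # 1.0 -- Jaccard cannot see the count difference
print(l1_distance(p, q))      # 2   -- the L1 profile distance can
```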

6.
In this paper we propose a new event generator with strong noise-filtering capabilities for use in event-based control systems with a PIDPlus controller. An approximate frequency analysis is performed to characterize the event generator, and tuning guidelines are provided for its design parameter. Simulation and experimental results obtained with a laboratory setup demonstrate the effectiveness of the methodology in providing satisfactory set-point and load-disturbance step responses with a total variation that is significantly reduced with respect to the standard cases.

7.
Motivated by the work of Abualrub and Siap (2009) [1], we investigate (1+λu)-constacyclic codes over F_p[u]/⟨u^m⟩ of arbitrary length, where λ is a nonzero element of F_p. We find the generator polynomials of (1+λu)-constacyclic codes over F_p[u]/⟨u^m⟩ and determine, for a given length, the number of such codes as well as the number of codewords in each code. Some optimal linear codes over F_3 and F_5 are constructed from (1+λu)-constacyclic codes over F_p + uF_p under a Gray map.

8.
Liang Lieyong (梁烈勇), 《大众科技》 2011, (11): 59-60
Low-density parity-check (LDPC) codes are linear block codes with sparse parity-check matrices whose performance can approach the Shannon limit. Owing to their excellent performance, they are well suited to future wireless communication systems and have become a research focus in channel coding. DVB-S2, the second-generation satellite broadcasting standard developed by the European Digital Video Broadcasting project group in 2003, adopts LDPC codes as its forward error-correcting code. Building on an analysis of LDPC codes, the article studies...

9.
Text simplification and text summarisation are related but different sub-tasks in Natural Language Generation. Whereas summarisation attempts to reduce the length of a document while keeping the original meaning, simplification attempts to reduce its complexity. In this work, we combine the tasks of summarisation and simplification using a novel hybrid architecture of abstractive and extractive summarisation called HTSS. We extend the well-known pointer-generator model to the combined task of summarisation and simplification. We collected our parallel corpus from the simplified summaries written by domain experts published on the science news website EurekaAlert (www.eurekalert.org). Our results show that the proposed HTSS model outperforms neural text simplification (NTS) on the SARI score and abstractive text summarisation (ATS) on the ROUGE score. We further introduce a new metric (CSS1) that combines SARI and ROUGE, and demonstrate that our proposed HTSS model outperforms NTS and ATS on the joint task of simplification and summarisation by 38.94% and 53.40%, respectively. We provide all code, models and corpora to the scientific community for future research at the following URL: https://github.com/slab-itu/HTSS/.

10.
The HCMU metric is an extremal metric with singularities on a Riemann surface; it can be characterized by a meromorphic 1-form. This paper gives some important properties of this meromorphic 1-form. As an application, we prove a necessary and sufficient condition for the Gaussian curvature of an HCMU metric to be smooth.

11.
Wang Jing (王静), 《科教文汇》 2011, (23): 10-10, 14
The success of education springs from love for students. Only when teachers bring sincere love into their teaching, letting it reach every student's heart, can they win students' respect, trust, and closeness. This forms the emotional foundation for educating students and shortens the psychological distance between teacher and student, thereby promoting students' healthy growth.

12.
Xiao Dehua (肖德华), 《中国科技纵横》 2011, (17): 151-151, 162
This paper studies the critical metrics of a class of Riemannian functionals F on a compact, connected, oriented, smooth n-dimensional (n ≥ 3) manifold M^n, defined by a combination of the L² norms of the Riemann curvature tensor and the Ricci curvature tensor and the square of the scalar curvature, normalized by a suitable power of the volume of the metric g. Using the moving-frame method, we obtain the Euler-Lagrange equation of the functional F, together with some sufficient conditions for an arbitrary Einstein metric to be a critical metric of F.

13.
Relation extraction aims at finding meaningful relationships between two named entities in unstructured textual content. In this paper, we cast information extraction as a matrix completion problem, employing the notion of universal schemas formed as a collection of patterns derived from open information extraction systems, together with additional features derived from grammatical clause patterns and statistical topic models. One of the challenges with earlier work that employs matrix completion methods is that such approaches require a sufficient number of observed relation instances to be able to make predictions. In practice, however, there is often insufficient explicit evidence supporting each relation type within the matrix model, so existing work suffers from low recall. We extend the state of the art by proposing novel ways of integrating two sets of features, topic models and grammatical clause structures, to alleviate the low-recall problem. More specifically, we propose to (1) employ grammatical clause information from textual sentences as an implicit indication of relation type and argument similarity, on the basis that similar relation types and arguments are likely to be observed within similar grammatical structures, and (2) use statistical topic models to determine the similarity of relation types and arguments based on their co-occurrence within the same topics. We have performed extensive experiments on both gold-standard and silver-standard datasets. The experiments show that our approach addresses the low-recall problem of existing methods, improving recall by 21% and F-measure by 8% over the state-of-the-art baseline.
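A minimal sketch of the underlying matrix-completion recipe (a generic low-rank factorization over an entity-pair × relation matrix; the toy data, dimensions, and SGD update are illustrative assumptions, not the paper's model):

```python
import numpy as np

# Toy universal-schema-style matrix: rows = entity pairs, cols = relation/pattern types.
# 1.0 = observed co-occurrence, np.nan = unobserved cell to be completed.
M = np.array([
    [1.0,    1.0,    np.nan],
    [np.nan, 1.0,    1.0],
    [1.0,    np.nan, np.nan],
])

rank, lr, reg, epochs = 2, 0.05, 0.01, 500
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(M.shape[0], rank))   # entity-pair embeddings
Q = rng.normal(scale=0.1, size=(M.shape[1], rank))   # relation embeddings

rows, cols = np.where(~np.isnan(M))                  # fit observed cells only
for _ in range(epochs):
    for i, j in zip(rows, cols):
        err = M[i, j] - P[i] @ Q[j]
        P[i] += lr * (err * Q[j] - reg * P[i])       # SGD on squared error
        Q[j] += lr * (err * P[i] - reg * Q[j])

print(np.round(P @ Q.T, 2))  # scores in unobserved cells = predicted relations
```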

14.
Analysis of arithmetic coding for data compression
Arithmetic coding, in conjunction with a suitable probabilistic model, can provide nearly optimal data compression. In this article we analyze the effect that the model and the particular implementation of arithmetic coding have on the code length obtained. Periodic scaling is often used in arithmetic coding implementations to reduce time and storage requirements; it also introduces a recency effect which can further affect compression. Our main contribution is to introduce the concept of weighted entropy and use it to characterize, in an elegant way, the effect that periodic scaling has on the code length. We explain why and by how much scaling increases the code length for files with a homogeneous distribution of symbols, and we characterize the reduction in code length due to scaling for files exhibiting locality of reference. We also give a rigorous proof that the coding effects of rounding scaled weights, using integer arithmetic, and encoding end-of-file are negligible.
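A compact illustration of the core interval-narrowing step and the near-entropy code length (a textbook float-precision sketch, without the periodic scaling and integer rounding the article analyzes; the symbol set and probabilities are made up):

```python
import math

# Static model (assumed for illustration): symbol -> probability.
probs = {"a": 0.6, "b": 0.3, "!": 0.1}

def cumulative(probs):
    """Map each symbol to its [low, high) slice of [0, 1)."""
    out, lo = {}, 0.0
    for s, p in probs.items():
        out[s] = (lo, lo + p)
        lo += p
    return out

def encode_interval(msg, probs):
    """Narrow [low, high) once per symbol; any number inside identifies msg."""
    cum, low, high = cumulative(probs), 0.0, 1.0
    for s in msg:
        width = high - low
        s_lo, s_hi = cum[s]
        low, high = low + width * s_lo, low + width * s_hi
    return low, high

msg = "aab!"
low, high = encode_interval(msg, probs)
ideal = -sum(math.log2(probs[s]) for s in msg)   # entropy cost of msg
print(f"interval width {high - low:.6f}, ideal length {ideal:.2f} bits")
# The width equals the product of symbol probabilities, so -log2(width)
# is exactly the ideal (model-optimal) code length the article starts from.
```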

15.
We develop a new transmission scheme for additive white Gaussian noise (AWGN) channels based on Fuchsian groups from rational quaternion algebras. The structure of the proposed Fuchsian codes is nonlinear and nonuniform, so conventional decoding methods based on linearity and symmetry do not apply. Previously, only brute-force decoding methods, with complexity linear in the code size, existed for general nonuniform codes. However, the properly discontinuous character of the action of Fuchsian groups on the complex upper half-plane translates into decoding complexity that is logarithmic in the code size, via a recently introduced point reduction algorithm.

16.
To address the limited ability of traditional decision support systems (DSS) to solve non- and semi-structured decision tasks (problems), people with special expertise and capabilities are defined as "humanware" and integrated into a new type of decision system in the form of humanware services. Driven by decision-task solving, the three kinds of services, humanware, software, and hardware, are tightly coupled and act together, forming the architecture of a new decision-system model. Technical schemes are then designed for the network-enabled unified management, querying, and invocation of humanware, software, and hardware services, establishing a Technological System for Humanware Services (TS4HwS) comprising three layers and nine technologies, which supports the new decision-system model.

17.
We consider a challenging clustering task: the clustering of multi-word terms without document co-occurrence information, in order to form coherent groups of topics. For this task, we developed a methodology that takes as input multi-word terms and the lexico-syntactic relations between them. Our clustering algorithm, named CPCL, is implemented in the TermWatch system. We compared CPCL to other existing clustering algorithms, namely hierarchical and partitioning methods (k-means, k-medoids). This out-of-context clustering task led us to adapt the multi-word term representation for statistical methods and to refine an existing cluster evaluation metric, the editing distance, in order to evaluate the methods. Evaluation was carried out on a list of multi-word terms from the genomic field which comes with a hand-built taxonomy. Results showed that while k-means and k-medoids obtained good scores on the editing distance, they were very sensitive to term length. CPCL, on the other hand, obtained a better cluster homogeneity score and was less sensitive to term length. CPCL also showed good adaptability for handling very large and sparse matrices.

18.
Test compression can greatly reduce test time and test data volume without any loss of fault coverage, bridging the gap between test-equipment capability and advances in chip manufacturing, and it has attracted wide attention in both academia and industry. Test data consist of test stimuli and test responses, and test compression correspondingly divides into stimulus compression and response compression; this work studies both. The main contributions are: 1. A Variable-Tail code is proposed. Variable-Tail is a variable-to-variable-length code that achieves a higher compression ratio for test vectors with a high density of X bits. Experimental data show that, when combined with a test-vector ordering algorithm, Variable-Tail coding comes very close to the theoretical upper bound of code-based compression (an average gap of about 1.26%) while also reducing test power by 20%. 2. A parallel core-wrapper design method is proposed. The study finds that scan slices in test vectors overlap fully or partially; when multiple scan slices overlap, they need to be loaded only once, which greatly reduces test time and test data volume. Experimental results show that with the parallel wrapper design, test time is reduced to 2/3 and test power to 1/15 of the original. 3. A 3X test compression architecture is proposed, comprising three main techniques: X-Config stimulus compression, X-Balance test generation, and X-Tolerant response compression. X-Config stimulus compression introduces a cycle-reconfigurable MUX network. X-Balance test generation produces test vectors while jointly considering dynamic compaction, test data compression, and scan design; it uses a backtrack-elimination algorithm and a scan-chain design algorithm based on the probability density of specified bits to reduce test-vector volume. X-Tolerant response compression proposes a single-output compression circuit based on convolutional codes; because the circuit needs only a single output, it always guarantees the maximum compression ratio. To improve tolerance of X bits, a multi-weight basic parity-check matrix generation algorithm is also proposed.

19.
A software protection method based on self-modifying code
A software protection method based on self-modifying code (SMC) is proposed. The method hides critical code by converting it into data stored in the original program; during execution of the protected executable, the attribute of the virtual memory pages holding the hidden code is changed to executable, turning the data back into executable code. Experiments show that the method is simple and easy to implement, and that it can effectively improve SMC's resistance to reverse analysis.
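A toy analogue of the hide-code-as-data idea in Python (real SMC flips memory-page protections, e.g. with mprotect or VirtualProtect as the abstract describes; this sketch only mimics the data-to-code conversion at the interpreter level, and the payload is invented):

```python
import base64

# "Critical code" shipped as opaque data inside the program (hypothetical payload).
HIDDEN = base64.b64encode(b"def check(key):\n    return key == 42\n")

def run_hidden(key):
    """Turn the stored data back into executable code only at call time."""
    ns = {}
    exec(compile(base64.b64decode(HIDDEN), "<hidden>", "exec"), ns)
    return ns["check"](key)

print(run_hidden(42))   # True
print(run_hidden(7))    # False
```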

20.
Hashing is an emerging topic that has recently attracted widespread attention in multi-modal similarity search applications. However, most existing approaches rely on relaxation schemes to generate binary codes, leading to large quantization errors. In addition, many existing approaches embed labels into the pairwise similarity matrix, leading to expensive time and space costs and a loss of category information. To address these issues, we propose Efficient Discrete Matrix factorization Hashing (EDMH). Specifically, EDMH first learns the latent subspace for each individual modality through a matrix factorization strategy, which preserves the semantic structure information of that modality. In particular, we develop a semantic label offset embedding learning strategy that improves the stability of label embedding regression. Furthermore, we design an efficient discrete optimization scheme to generate compact binary codes discretely. Finally, we present two efficient learning strategies, EDMH-L and EDMH-S, to pursue high-quality hash functions. Extensive experiments on various widely used databases verify that the proposed algorithms deliver significant performance and outperform some state-of-the-art approaches, with average improvements of 2.50% (Wiki), 2.66% (MIRFlickr) and 2.25% (NUS-WIDE) over the best available results, respectively.
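A minimal sketch of the general factorize-then-binarize recipe the abstract contrasts itself with (a relaxation-style baseline whose quantization error EDMH's discrete scheme aims to avoid; the data, code length, and variable names are illustrative, not EDMH itself):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))             # one modality's features (toy data)

k = 16                                     # code length in bits
# Relaxed baseline: low-rank factorization of X, then binarize the factors.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
latent = U[:, :k] * s[:k]                  # real-valued latent representation

B = np.where(latent >= 0, 1, -1)           # sign thresholding -> {-1,+1} codes
quant_err = np.linalg.norm(latent - B)     # the quantization error incurred
print(B.shape, round(float(quant_err), 2)) # by relaxing and then rounding
```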
