Similar Documents
 19 similar documents found (search time: 593 ms)
1.
An Empirical Comparison of SVM and KNN for Chinese Text Classification   Cited by: 1 (self-citations: 0, others: 1)
This paper details the Chinese text classification process and the concrete steps of applying the SVM and KNN methods to Chinese text, and presents a model for Chinese text classification. Through experiments, an empirical comparison was made between the SVM algorithm and the traditional KNN algorithm as applied to text classification. The study shows that the SVM classifier handles Chinese text classification better than KNN, achieving higher recall and precision.
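A hedged sketch of the KNN side of such a comparison: the documents, labels, and tokens below are invented for illustration (real Chinese text would first be segmented, e.g. with a tool such as jieba), and the classifier is a minimal cosine-similarity KNN over term-frequency vectors, not the systems evaluated in the study.

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors (dicts).
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(doc_tokens, train, k=3):
    # train: list of (term-frequency Counter, label); doc_tokens: token list.
    vec = Counter(doc_tokens)
    ranked = sorted(train, key=lambda tl: cosine(vec, tl[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy pre-segmented "documents" with hypothetical category labels.
train = [
    (Counter(["股票", "市场", "上涨"]), "finance"),
    (Counter(["基金", "市场", "投资"]), "finance"),
    (Counter(["足球", "比赛", "进球"]), "sports"),
    (Counter(["篮球", "比赛", "冠军"]), "sports"),
]
print(knn_classify(["市场", "投资", "股票"], train, k=3))  # → finance
```

Swapping the voting step for a maximum-margin decision would give the SVM side of the comparison; in practice both are typically run via a library rather than hand-coded.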

2.
Although there has been considerable research on Chinese text classification in recent years, little work has applied deep learning to the automatic classification of long Chinese documents such as policy texts. Drawing on and extending traditional data augmentation methods, this study proposes NEWT, a new computational framework that integrates the New Era People's Daily segmented corpus (NEPD), the Easy Data Augmentation (EDA) algorithm, word2vec, and a text convolutional neural network (TextCNN). In the empirical part, the algorithm is validated on science and technology policy texts issued by Chinese local governments. The results show that, at input lengths of 500, 750, and 1,000 words, NEWT classifies Chinese science and technology policy texts better than traditional deep learning models such as RCNN, Bi-LSTM, and CapsNet, improving the F1 score by more than 13% on average. NEWT also approximates the effect of full-text input at shorter input lengths, partially improving the computational efficiency of traditional deep learning models on the automatic classification of long Chinese texts.

3.
Following basic software engineering principles, a NaiveBayes text classification system based on the Hadoop architecture was designed and implemented on Ubuntu with the Eclipse development environment. The system stores a large Chinese text dataset on the distributed file system HDFS, segments the Chinese data using the MapReduce parallel computing model and the Ansj Chinese segmentation library, and extracts text features with the TF-IDF algorithm. A NaiveBayes classification model is then trained on the feature dataset with the Spark parallel computing framework, and the classification service is integrated into a Web page. The system achieves essentially correct text classification.
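The core of such a system, multinomial Naive Bayes with Laplace smoothing, can be sketched in a few lines; the documents and labels below are invented, and this single-machine sketch stands in for the Hadoop/Spark pipeline described above.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    # docs: list of (token list, label). Multinomial NB with Laplace smoothing.
    class_counts = Counter(label for _, label in docs)
    term_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        term_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, term_counts, vocab, len(docs)

def predict_nb(tokens, model):
    class_counts, term_counts, vocab, n_docs = model
    best, best_lp = None, -math.inf
    for c in class_counts:
        lp = math.log(class_counts[c] / n_docs)   # log prior
        total = sum(term_counts[c].values())
        for t in tokens:                          # smoothed log likelihoods
            lp += math.log((term_counts[c][t] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Hypothetical pre-segmented training documents.
docs = [
    (["天气", "晴朗", "温度"], "weather"),
    (["降雨", "温度", "预报"], "weather"),
    (["球队", "胜利", "比分"], "sports"),
    (["教练", "球队", "训练"], "sports"),
]
model = train_nb(docs)
print(predict_nb(["温度", "预报"], model))  # → weather
```

In the distributed setting, the per-class term counts are exactly the kind of aggregation MapReduce and Spark compute in parallel; the prediction step is unchanged.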

4.
张成宝  王志玲 《情报杂志》2007,26(10):33-35
Using the principles and methods of the Analytic Hierarchy Process (AHP), this paper examines how to evaluate the factors influencing Chinese text classification systems. First, an indicator system for classification system performance is proposed and a hierarchical evaluation model is established; next, pairwise comparison matrices are constructed from the results of an expert survey; finally, the weights of the evaluation indicators at each level are computed with the dedicated AHP software Expert Choice, and the results are analyzed.
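The AHP weight computation can be sketched as follows. The 3×3 pairwise comparison matrix is hypothetical (the paper's actual indicator system and expert judgments are not reproduced here), and the principal eigenvector is approximated by power iteration rather than with a dedicated tool such as Expert Choice.

```python
def ahp_weights(matrix, iters=100):
    # Approximate the principal eigenvector of a pairwise comparison
    # matrix by power iteration, normalized so the weights sum to 1.
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        nxt = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(nxt)
        w = [x / s for x in nxt]
    return w

# Hypothetical judgments for 3 criteria: criterion 1 is judged 3x as
# important as criterion 2 and 5x as important as criterion 3, etc.
m = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
weights = ahp_weights(m)
print([round(w, 3) for w in weights])  # roughly [0.648, 0.230, 0.122]
```

A full AHP analysis would also compute the consistency ratio of each matrix before accepting its weights.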

5.
To improve the accuracy and efficiency of text classification, a classification model based on latent semantic analysis and the hypersphere support vector machine is proposed. To address the slow convergence of SVM on large-scale text classification, the hypersphere SVM is applied to text classification, using an incremental-learning hypersphere SVM algorithm for training and classification. Experimental results show that the hypersphere SVM is an effective way to solve the SVM problem: in text classification it achieves accuracy comparable to SVM while markedly reducing model complexity and training time.

6.
This paper designs and implements a classification system for Chinese web search results, including a method for judging the page type of keyword search results and the extraction of a page's main content. For pages that are difficult to classify, it uses summary-based clustering of search results and a learning-based classifier for search results. Finally, a Chinese text classifier is constructed and implemented in code, and its performance is tested on concrete examples.

7.
To address the lack of semantics in the vector space model, a semantic lexicon (HowNet) is applied to the text classification process to improve accuracy. For polysemy in Chinese text, an improved word semantic similarity measure is proposed: word sense disambiguation selects the appropriate sense before similarity is computed, words whose similarity exceeds a threshold are clustered, and the text feature vectors are thereby reduced in dimension. A semantics-based text classification algorithm is presented and analyzed experimentally. The results show that the algorithm effectively improves Chinese text classification.

8.
《科技风》2016,(2)
Chinese words can be extracted from text by classifying the character bigrams it contains. This work studies the combination patterns of Chinese bigrams and extracts multiple features, including bigram frequency, adjacency entropy, bigram probability, mutual information, and chi-square value. Machine learning is used to classify bigrams into three classes, words, non-words, and extensible candidates, enabling automatic extraction of Chinese words. In the experiments, a Naive Bayes model and a decision tree model were trained and used to predict bigrams and extract Chinese words. The results show that the decision tree model classifies better, with 70.3% precision, 73.5% recall, and an F1 of 71.9%.
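One of the bigram features listed above, (pointwise) mutual information, can be computed as in this sketch. All counts are hypothetical, and the formula is the standard PMI, which may differ in detail from the exact feature definition used in the study.

```python
import math

def pmi(bigram_count, left_count, right_count, total_unigrams, total_bigrams):
    # Pointwise mutual information of a character bigram xy:
    #   PMI(x, y) = log2( P(xy) / (P(x) * P(y)) )
    # High PMI suggests the two characters co-occur far more often than
    # chance, a signal that the bigram is a genuine word.
    p_xy = bigram_count / total_bigrams
    p_x = left_count / total_unigrams
    p_y = right_count / total_unigrams
    return math.log2(p_xy / (p_x * p_y))

# Hypothetical corpus counts: the bigram "电脑" occurs 50 times,
# "电" 120 times, "脑" 80 times.
score = pmi(50, 120, 80, total_unigrams=10000, total_bigrams=9000)
print(round(score, 2))  # → 5.85
```

In a word-extraction pipeline, this value would be one column of the feature vector fed to the Naive Bayes or decision tree classifier, alongside frequency and adjacency entropy.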

9.
Drawing on the characteristics of e-government information, this article discusses the application of Chinese text classification technology in e-government, points out problems in current Chinese text classification research, and offers suggestions for applying it in e-government. Finally, it argues that strengthening the construction of electronic dictionaries for e-government is an important step toward the wide application of automatic classification technology in this domain.

10.
[Purpose/Significance] To provide a reference for improving the efficiency of acquiring open-source military intelligence. [Method/Process] Open-source text information from the Internet is analyzed and processed; a machine-learning-based text classification method is used to filter out military texts, and the effects of the text vector space model and the classification model on open-source military intelligence extraction are analyzed. [Result/Conclusion] The text classification method achieves high precision, recall, and F-score, realizing Chinese open-source military intelligence...

11.
Using text classification software and Chinese text data from 10 broad categories, with a 2:1 ratio of training set to test set, the KNN and SVM classification algorithms were applied to automatically classify the dataset. Through these concrete corpus experiments, the study explores the key techniques of automatic text classification, analyzes, compares, and evaluates the experimental results, and discusses the setting of specific parameters and the relative strengths and weaknesses of the different classification algorithms.

12.
Text categorization pertains to the automatic learning of a text categorization model from a training set of preclassified documents on the basis of their contents and the subsequent assignment of unclassified documents to appropriate categories. Most existing text categorization techniques deal with monolingual documents (i.e., written in the same language) during the learning of the text categorization model and category assignment (or prediction) for unclassified documents. However, with the globalization of business environments and advances in Internet technology, an organization or individual may generate and organize into categories documents in one language and subsequently archive documents in different languages into existing categories, which necessitate cross-lingual text categorization (CLTC). Specifically, cross-lingual text categorization deals with learning a text categorization model from a set of training documents written in one language (e.g., L1) and then classifying new documents in a different language (e.g., L2). Motivated by the significance of this demand, this study aims to design a CLTC technique with two different category assignment methods, namely, individual- and cluster-based. Using monolingual text categorization as a performance reference, our empirical evaluation results demonstrate the cross-lingual capability of the proposed CLTC technique. Moreover, the classification accuracy achieved by the cluster-based category assignment method is statistically significantly higher than that attained by the individual-based method.

13.
In this paper, a Generalized Cluster Centroid based Classifier (GCCC) and its variants for text categorization are proposed by utilizing a clustering algorithm to integrate two well-known classifiers, i.e., the K-nearest-neighbor (KNN) classifier and the Rocchio classifier. KNN, a lazy learning method, suffers from inefficiency in online categorization while achieving remarkable effectiveness. Rocchio, which has efficient categorization performance, fails to obtain an expressive categorization model due to its inherent linear separability assumption. Our proposed method mainly focuses on two points: one point is that we use a clustering algorithm to strengthen the expressiveness of the Rocchio model; another one is that we employ the improved Rocchio model to speed up the categorization process of KNN. Extensive experiments conducted on both English and Chinese corpora show that GCCC and its variants have better categorization ability than some state-of-the-art classifiers, i.e., Rocchio, KNN and Support Vector Machine (SVM).
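A minimal illustration of the Rocchio-style class-centroid idea that GCCC builds on: each class is summarized by the mean of its document vectors, and a new document is assigned to the most similar centroid. The data is invented, and this sketch omits the clustering step (multiple centroids per class) that GCCC adds to relax the linear separability assumption.

```python
import math
from collections import Counter, defaultdict

def cosine(a, b):
    # Cosine similarity between two sparse term-weight vectors (dicts).
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def class_centroids(train):
    # Rocchio-style: one centroid per class, the mean of its document vectors.
    groups = defaultdict(list)
    for tokens, label in train:
        groups[label].append(Counter(tokens))
    cents = {}
    for label, vecs in groups.items():
        merged = Counter()
        for v in vecs:
            merged.update(v)
        cents[label] = {t: c / len(vecs) for t, c in merged.items()}
    return cents

def classify(tokens, cents):
    # Assign the document to the class with the most similar centroid.
    vec = Counter(tokens)
    return max(cents, key=lambda lbl: cosine(vec, cents[lbl]))

# Hypothetical tokenized training documents.
train = [
    (["price", "market", "trade"], "finance"),
    (["stock", "market", "fund"], "finance"),
    (["goal", "match", "team"], "sports"),
    (["coach", "team", "league"], "sports"),
]
cents = class_centroids(train)
print(classify(["market", "stock"], cents))  # → finance
```

Comparing a document against a handful of centroids instead of every training document is exactly the speed-up over KNN that the abstract describes.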

14.
This paper examines several different approaches to exploiting structural information in semi-structured document categorization. The methods under consideration are designed for categorization of documents consisting of a collection of fields, or arbitrary tree-structured documents that can be adequately modeled with such a flat structure. The approaches range from trivial modifications of text modeling to more elaborate schemes, specifically tailored to structured documents. We combine these methods with three different text classification algorithms and evaluate their performance on four standard datasets containing different types of semi-structured documents. The best results were obtained with stacking, an approach in which predictions based on different structural components are combined by a meta classifier. A further improvement of this method is achieved by including the flat text model in the final prediction.

15.
萧莉明  于宽  蔡珣 《现代情报》2007,27(4):146-147,150
This paper designs an effective automatic classification system for Chinese journals based on a Bayesian classifier. First, the system uses the journal title as the sole indexing content and applies automatic word segmentation to split journal titles into the sample sets to be classified; second, a classification library is built by training on the library's sample data, and a Bayesian classifier is used to automatically classify the Chinese journals. Experimental results show that the classifier classifies Chinese journals with high efficiency and accuracy.

16.
A new dictionary-based text categorization approach is proposed to classify chemical web pages efficiently. Using a chemistry dictionary, the approach can extract chemistry-related information more exactly from web pages. After automatic segmentation of the documents to find dictionary terms for document expansion, the approach adopts latent semantic indexing (LSI) to produce the final document vectors, and the relevant categories are finally assigned to the test document using the k-NN text categorization algorithm. The effects of the characteristics of the chemistry dictionary and the test collection on categorization efficiency are discussed in this paper, and a new voting method is also introduced to further improve categorization performance based on the collection characteristics. The experimental results show that the proposed approach outperforms the traditional categorization method and is applicable to the classification of chemical web pages.

17.
In automatic text classification there are currently two probability estimation methods, term frequency statistics and document frequency statistics, and the choice of estimation method directly affects the quality of feature extraction and the accuracy of classification. This paper implements a Chinese text classifier with the K-nearest-neighbor algorithm and runs training and classification experiments on both balanced and unbalanced Chinese training corpora. The experimental data show that with an unbalanced corpus, term-frequency-based probability estimation should be used, while with a balanced corpus, document-frequency-based probability estimation effectively extracts high-quality text features and thus improves classification accuracy.

18.
A challenge for sentence categorization and novelty mining is to detect not only when text is relevant to the user’s information need, but also when it contains something new which the user has not seen before. It involves two tasks that need to be solved. The first is identifying relevant sentences (categorization) and the second is identifying new information from those relevant sentences (novelty mining). Many previous studies of relevant sentence retrieval and novelty mining have been conducted on the English language, but few papers have addressed the problem of multilingual sentence categorization and novelty mining. This is an important issue in global business environments, where mining knowledge from text in a single language is not sufficient. In this paper, we perform the first task by categorizing Malay and Chinese sentences, then comparing their performances with that of English. Thereafter, we conduct novelty mining to identify the sentences with new information. Experimental results on TREC 2004 Novelty Track data show similar categorization performance on Malay and English sentences, which greatly outperform Chinese. In the second task, it is observed that we can achieve similar novelty mining results for all three languages, which indicates that our algorithm is suitable for novelty mining of multilingual sentences. In addition, after benchmarking our results with novelty mining without categorization, it is learnt that categorization is necessary for the successful performance of novelty mining.

19.
Multi-label text categorization refers to the problem of assigning each document to a subset of categories by means of multi-label learning algorithms. Unlike English and most other languages, the unavailability of Arabic benchmark datasets prevents evaluating multi-label learning algorithms for Arabic text categorization. As a result, only a few recent studies have dealt with multi-label Arabic text categorization on non-benchmark and inaccessible datasets. Therefore, this work aims to promote multi-label Arabic text categorization through (a) introducing “RTAnews”, a new benchmark dataset of multi-label Arabic news articles for text categorization and other supervised learning tasks. The benchmark is publicly available in several formats compatible with the existing multi-label learning tools, such as MEKA and Mulan. (b) Conducting an extensive comparison of most of the well-known multi-label learning algorithms for Arabic text categorization in order to have baseline results and show the effectiveness of these algorithms for Arabic text categorization on RTAnews. The evaluation involves four multi-label transformation-based algorithms: Binary Relevance, Classifier Chains, Calibrated Ranking by Pairwise Comparison and Label Powerset, with three base learners (Support Vector Machine, k-Nearest-Neighbors and Random Forest); and four adaptation-based algorithms (Multi-label kNN, Instance-Based Learning by Logistic Regression Multi-label, Binary Relevance kNN and RFBoost). The reported baseline results show that both RFBoost and Label Powerset with Support Vector Machine as base learner outperformed other compared algorithms. Results also demonstrated that adaptation-based algorithms are faster than transformation-based algorithms.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号