19 similar documents found (search time: 593 ms)
1.
2.
Although there has been considerable recent research on Chinese text classification, few studies have applied deep learning to the automatic classification of long Chinese texts such as policy documents. Drawing on and extending traditional data augmentation methods, this paper proposes NEWT, a new computational framework integrating the New Era People's Daily segmentation corpus (NEPD), the Easy Data Augmentation (EDA) algorithm, word2vec and a text convolutional neural network (TextCNN); the empirical part validates the algorithm on science-and-technology policy texts issued by Chinese local governments. Experimental results show that, at input lengths of 500, 750 and 1,000 words, NEWT outperforms conventional deep learning models such as RCNN, Bi-LSTM and CapsNet in classifying Chinese science-and-technology policy texts, raising the F1 score by more than 13% on average; moreover, NEWT approximates the effect of full-text input at shorter input lengths, partly improving the computational efficiency of conventional deep learning models on the automatic classification of long Chinese texts.
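The NEWT framework itself is not public; as a minimal illustration of the EDA step it builds on, the sketch below applies two of EDA's four operations (random swap and random deletion) to a tokenized sentence. The token list and parameter values are invented for demonstration.

```python
import random

def eda_augment(tokens, n_aug=4, alpha=0.1, seed=42):
    """Generate n_aug augmented copies of a tokenized sentence using two
    EDA operations: random swap and random deletion. (Synonym replacement
    and insertion would require a thesaurus and are omitted here.)"""
    rng = random.Random(seed)
    n_ops = max(1, int(alpha * len(tokens)))  # edits per augmented copy
    augmented = []
    for _ in range(n_aug):
        copy = list(tokens)
        # random swap: exchange two random positions, n_ops times
        for _ in range(n_ops):
            i, j = rng.randrange(len(copy)), rng.randrange(len(copy))
            copy[i], copy[j] = copy[j], copy[i]
        # random deletion: drop each token with probability alpha,
        # but never delete the whole sentence
        kept = [t for t in copy if rng.random() > alpha]
        augmented.append(kept if kept else [rng.choice(copy)])
    return augmented

# hypothetical segmented policy-text fragment
aug = eda_augment(["加快", "科技", "成果", "转化", "政策"], n_aug=4)
```

In the full pipeline, the augmented sentences would be embedded with word2vec and fed to the TextCNN classifier.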
3.
4.
Using the principles and methods of the Analytic Hierarchy Process (AHP), this paper addresses the evaluation of the factors affecting Chinese text classification systems. First, an indicator system for text classification system performance is proposed and a hierarchical evaluation model is built; next, pairwise comparison judgment matrices are constructed from the results of an expert survey; finally, the weights of the indicators at each level are computed with the dedicated AHP software Expert Choice, and the results are analyzed and explained.
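The paper's weights come from Expert Choice (which uses the principal-eigenvector method); as a sketch, the geometric-mean approximation below derives priority weights from a pairwise comparison matrix. The 3-criterion matrix is an invented example, not the paper's data.

```python
import math

def ahp_weights(matrix):
    """Priority weights from a pairwise comparison matrix via the
    geometric-mean (logarithmic least squares) method: take the
    geometric mean of each row, then normalize to sum to 1."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# hypothetical judgments: criterion A is 3x as important as B,
# 5x as important as C; B is 2x as important as C
M = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(M)
```

For a consistent matrix this coincides with the eigenvector weights; a full AHP implementation would also compute the consistency ratio to validate the expert judgments.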
5.
To improve the accuracy and efficiency of text classification, a model based on latent semantic analysis and the hypersphere support vector machine is proposed. To address the slow convergence of SVMs on large-scale text classification, this paper applies the hypersphere support vector machine to text classification, using an incremental-learning-based hypersphere SVM algorithm for training and classification. Experimental results show that the hypersphere SVM is an effective way to solve the SVM problem: on text classification it achieves accuracy comparable to a standard SVM while markedly reducing model complexity and training time.
6.
This paper designs and implements a classification system for Chinese search result pages, including a method for judging the page type of keyword search results and extraction of a page's main content. For pages that are hard to classify, it applies snippet-based clustering of search results and a learning-based classifier for search result pages. Finally, a Chinese text classifier is built and implemented, and its performance is tested on concrete examples.
7.
8.
9.
Based on the characteristics of e-government information, this article discusses the application of Chinese text classification techniques in e-government, points out open problems in current Chinese text classification research, and offers suggestions for its application in e-government. It concludes that strengthening the construction of electronic dictionaries for e-government is an important step toward the wide adoption of automatic classification in that domain.
10.
11.
Using text classification software and Chinese text data drawn from 10 broad categories, with a 2:1 ratio of training set to test set, the KNN and SVM algorithms are applied to automatically classify the dataset. Through these concrete corpus experiments, the study examines the key techniques of automatic text classification, analyzes, compares and evaluates the experimental results, and discusses parameter settings and the relative strengths and weaknesses of the different classification algorithms.
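The abstract does not name the software used; as a minimal sketch of one of the two algorithms compared, the KNN classifier below votes among the k nearest training documents under cosine similarity of term-frequency vectors. The four-document corpus and its two categories are invented toy data.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def knn_classify(train, query_tokens, k=3):
    """train: list of (token_list, label) pairs. Rank training documents
    by cosine similarity to the query, then take a majority vote among
    the k nearest."""
    q = Counter(query_tokens)
    ranked = sorted(train, key=lambda d: cosine(q, Counter(d[0])),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# toy segmented documents from two of the 10 categories
train = [
    (["股市", "上涨", "指数"], "财经"),
    (["利率", "央行", "股市"], "财经"),
    (["球队", "比赛", "夺冠"], "体育"),
    (["教练", "球员", "比赛"], "体育"),
]
pred = knn_classify(train, ["股市", "指数", "利率"], k=3)
```

A real run would precede this with feature selection and TF-IDF weighting, which is where the parameter choices discussed in the study come in.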
12.
Cross-lingual text categorization: Conquering language boundaries in globalized environments   Total citations: 1 (self-citations: 0, other citations: 1)
Text categorization pertains to the automatic learning of a text categorization model from a training set of preclassified documents on the basis of their contents and the subsequent assignment of unclassified documents to appropriate categories. Most existing text categorization techniques deal with monolingual documents (i.e., written in the same language) during the learning of the text categorization model and category assignment (or prediction) for unclassified documents. However, with the globalization of business environments and advances in Internet technology, an organization or individual may generate and organize into categories documents in one language and subsequently archive documents in different languages into existing categories, which necessitates cross-lingual text categorization (CLTC). Specifically, cross-lingual text categorization deals with learning a text categorization model from a set of training documents written in one language (e.g., L1) and then classifying new documents in a different language (e.g., L2). Motivated by the significance of this demand, this study aims to design a CLTC technique with two different category assignment methods, namely, individual- and cluster-based. Using monolingual text categorization as a performance reference, our empirical evaluation results demonstrate the cross-lingual capability of the proposed CLTC technique. Moreover, the classification accuracy achieved by the cluster-based category assignment method is statistically significantly higher than that attained by the individual-based method.
13.
In this paper, a Generalized Cluster Centroid based Classifier (GCCC) and its variants for text categorization are proposed by utilizing a clustering algorithm to integrate two well-known classifiers, i.e., the K-nearest-neighbor (KNN) classifier and the Rocchio classifier. KNN, a lazy learning method, suffers from inefficiency in online categorization while achieving remarkable effectiveness. Rocchio, which has efficient categorization performance, fails to obtain an expressive categorization model due to its inherent linear separability assumption. Our proposed method mainly focuses on two points: one is that we use a clustering algorithm to strengthen the expressiveness of the Rocchio model; the other is that we employ the improved Rocchio model to speed up the categorization process of KNN. Extensive experiments conducted on both English and Chinese corpora show that GCCC and its variants have better categorization ability than some state-of-the-art classifiers, i.e., Rocchio, KNN and Support Vector Machine (SVM).
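As a baseline for the Rocchio side of this hybrid, the sketch below builds one centroid per class (the average term-frequency vector) and assigns a document to the nearest centroid under cosine similarity; GCCC would instead cluster each class and keep one centroid per cluster. The training corpus is invented toy data.

```python
from collections import Counter, defaultdict
import math

def centroids(train):
    """Rocchio-style class centroids: the average term-frequency
    vector of all training documents in each class."""
    sums, counts = defaultdict(Counter), Counter()
    for tokens, label in train:
        sums[label].update(tokens)
        counts[label] += 1
    return {lab: {t: v / counts[lab] for t, v in vec.items()}
            for lab, vec in sums.items()}

def classify(cents, tokens):
    """Assign the query to the class with the most similar centroid."""
    q = Counter(tokens)
    qn = math.sqrt(sum(v * v for v in q.values()))
    def cos(c):
        den = qn * math.sqrt(sum(v * v for v in c.values()))
        return (sum(q[t] * c.get(t, 0) for t in q) / den) if den else 0.0
    return max(cents, key=lambda lab: cos(cents[lab]))

train = [(["cheap", "flights", "hotel"], "travel"),
         (["flights", "airline", "booking"], "travel"),
         (["goal", "match", "league"], "sport"),
         (["match", "player", "coach"], "sport")]
cents = centroids(train)
label = classify(cents, ["hotel", "booking", "flights"])
```

Replacing the per-class centroid with several per-cluster centroids relaxes the linear separability assumption while still letting KNN search over a handful of centroids instead of every training document.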
14.
This paper examines several different approaches to exploiting structural information in semi-structured document categorization. The methods under consideration are designed for categorization of documents consisting of a collection of fields, or arbitrary tree-structured documents that can be adequately modeled with such a flat structure. The approaches range from trivial modifications of text modeling to more elaborate schemes, specifically tailored to structured documents. We combine these methods with three different text classification algorithms and evaluate their performance on four standard datasets containing different types of semi-structured documents. The best results were obtained with stacking, an approach in which predictions based on different structural components are combined by a meta classifier. A further improvement of this method is achieved by including the flat text model in the final prediction.
15.
16.
Chun-Yan Liang, Li Guo, Zhao-Jie Xia, Feng-Guang Nie, Xiao-Xia Li, Liang Su, Zhang-Yuan Yang 《Information processing & management》2006
A new dictionary-based text categorization approach is proposed to classify chemical web pages efficiently. Using a chemistry dictionary, the approach can extract chemistry-related information more exactly from web pages. After automatic segmentation of the documents to find dictionary terms for document expansion, the approach adopts latent semantic indexing (LSI) to produce the final document vectors, and the relevant categories are finally assigned to the test document using the k-NN text categorization algorithm. The effects of the characteristics of the chemistry dictionary and test collection on categorization efficiency are discussed in this paper, and a new voting method is also introduced to further improve categorization performance based on the collection characteristics. The experimental results show that the proposed approach outperforms the traditional categorization method and is applicable to the classification of chemical web pages.
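The dictionary-matching step that feeds document expansion can be sketched as a greedy longest-match scan, shown below with an invented three-term dictionary; the paper's actual chemistry dictionary and segmentation rules are not reproduced here.

```python
def dictionary_terms(text, dictionary):
    """Greedy longest-match scan: return the dictionary terms found in
    text, preferring the longest term at each position. The matches
    would be appended to the document (document expansion) before LSI
    indexing."""
    terms, i = [], 0
    max_len = max(map(len, dictionary))  # longest candidate to try
    while i < len(text):
        for l in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + l] in dictionary:
                terms.append(text[i:i + l])
                i += l
                break
        else:
            i += 1  # no term starts here; advance one character
    return terms

# hypothetical chemistry dictionary
dic = {"ethanol", "acid", "sulfuric acid"}
found = dictionary_terms("dilute sulfuric acid and ethanol", dic)
```

Note that the multi-word term "sulfuric acid" is matched as a unit rather than as the shorter entry "acid", which is the point of scanning longest-first.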
17.
In automatic text classification there are currently two probability estimation methods, based on term frequency and on document frequency statistics, and whether the chosen method is appropriate directly affects the quality of feature extraction and the accuracy of classification. This paper implements a Chinese text classifier using the K-nearest-neighbor algorithm and runs training and classification experiments on both balanced and imbalanced Chinese corpora. The experimental data show that with an imbalanced corpus, term-frequency-based probability estimation should be used, while with a balanced corpus, document-frequency-based estimation effectively extracts high-quality text features and thereby improves classification accuracy.
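The two estimation methods contrasted above differ in what they count: total term occurrences versus the number of documents containing a term. A minimal sketch of both, on an invented three-document corpus:

```python
from collections import Counter

def term_freq_probs(corpus):
    """P(t) estimated from total term occurrences (term frequency)."""
    tf = Counter()
    for doc in corpus:
        tf.update(doc)
    total = sum(tf.values())
    return {t: c / total for t, c in tf.items()}

def doc_freq_probs(corpus):
    """P(t) estimated from the fraction of documents containing t
    (document frequency); repeats within a document count once."""
    df = Counter()
    for doc in corpus:
        df.update(set(doc))
    n = len(corpus)
    return {t: c / n for t, c in df.items()}

corpus = [["经济", "经济", "增长"],   # repeated term inflates TF, not DF
          ["经济", "政策"],
          ["体育", "比赛"]]
tf = term_freq_probs(corpus)
df = doc_freq_probs(corpus)
```

On an imbalanced corpus, frequent terms in the large classes dominate DF counts, which is one plausible reading of why the study finds TF-based estimation preferable there.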
18.
A challenge for sentence categorization and novelty mining is to detect not only when text is relevant to the user’s information need, but also when it contains something new which the user has not seen before. It involves two tasks that need to be solved. The first is identifying relevant sentences (categorization) and the second is identifying new information from those relevant sentences (novelty mining). Many previous studies of relevant sentence retrieval and novelty mining have been conducted on the English language, but few papers have addressed the problem of multilingual sentence categorization and novelty mining. This is an important issue in global business environments, where mining knowledge from text in a single language is not sufficient. In this paper, we perform the first task by categorizing Malay and Chinese sentences, then comparing their performances with that of English. Thereafter, we conduct novelty mining to identify the sentences with new information. Experimental results on TREC 2004 Novelty Track data show similar categorization performance on Malay and English sentences, both of which greatly outperform Chinese. In the second task, it is observed that we can achieve similar novelty mining results for all three languages, which indicates that our algorithm is suitable for novelty mining of multilingual sentences. In addition, after benchmarking our results with novelty mining without categorization, it is learnt that categorization is necessary for the successful performance of novelty mining.
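One common formulation of the novelty-mining step (not necessarily the exact metric used in this paper) flags a relevant sentence as novel when its maximum similarity to everything already seen stays below a threshold. A sketch with cosine similarity and invented sentences:

```python
from collections import Counter
import math

def is_novel(sentence_tokens, seen, threshold=0.8):
    """A relevant sentence is novel if its maximum cosine similarity to
    all previously seen sentences is below the threshold."""
    q = Counter(sentence_tokens)
    qn = math.sqrt(sum(v * v for v in q.values()))
    best = 0.0
    for s in seen:
        c = Counter(s)
        den = qn * math.sqrt(sum(v * v for v in c.values()))
        sim = (sum(q[t] * c.get(t, 0) for t in q) / den) if den else 0.0
        best = max(best, sim)
    return best < threshold

seen = [["sales", "rose", "ten", "percent"]]
novel = is_novel(["profits", "fell", "sharply"], seen)        # no overlap
duplicate = is_novel(["sales", "rose", "ten", "percent"], seen)  # exact repeat
```

Running categorization first, as the paper does, keeps irrelevant sentences out of `seen`, so the novelty decision is made only against on-topic history.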
19.
Bassam Al-Salemi, Masri Ayob, Graham Kendall, Shahrul Azman Mohd Noah 《Information processing & management》2019, 56(1): 212-227
Multi-label text categorization refers to the problem of assigning each document to a subset of categories by means of multi-label learning algorithms. Unlike English and most other languages, the unavailability of Arabic benchmark datasets prevents evaluating multi-label learning algorithms for Arabic text categorization. As a result, only a few recent studies have dealt with multi-label Arabic text categorization, on non-benchmark and inaccessible datasets. Therefore, this work aims to promote multi-label Arabic text categorization through (a) introducing “RTAnews”, a new benchmark dataset of multi-label Arabic news articles for text categorization and other supervised learning tasks. The benchmark is publicly available in several formats compatible with existing multi-label learning tools, such as MEKA and Mulan. (b) conducting an extensive comparison of most of the well-known multi-label learning algorithms for Arabic text categorization in order to provide baseline results and show the effectiveness of these algorithms for Arabic text categorization on RTAnews. The evaluation involves four multi-label transformation-based algorithms: Binary Relevance, Classifier Chains, Calibrated Ranking by Pairwise Comparison and Label Powerset, with three base learners (Support Vector Machine, k-Nearest-Neighbors and Random Forest); and four adaptation-based algorithms (Multi-label kNN, Instance-Based Learning by Logistic Regression Multi-label, Binary Relevance kNN and RFBoost). The reported baseline results show that both RFBoost and Label Powerset with Support Vector Machine as base learner outperformed the other compared algorithms. Results also demonstrated that adaptation-based algorithms are faster than transformation-based algorithms.
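The simplest of the transformation-based algorithms evaluated above, Binary Relevance, reduces a multi-label problem to one independent binary classifier per label. The sketch below shows the transformation itself; the toy memorizing learner and the tiny dataset are invented stand-ins for the SVM/kNN/Random Forest base learners used in the paper.

```python
def binary_relevance_fit(X, Y, labels, fit_binary):
    """Binary Relevance transformation: train one independent binary
    classifier per label. X: feature vectors; Y: per-example label
    sets; fit_binary: any learner factory returning predict(x) -> bool."""
    return {lab: fit_binary(X, [lab in y for y in Y]) for lab in labels}

def binary_relevance_predict(models, x):
    """The predicted label set is every label whose classifier fires."""
    return {lab for lab, model in models.items() if model(x)}

# toy stand-in for a real base learner: memorize the answer per
# exact feature tuple, defaulting to False for unseen inputs
def fit_binary(X, y):
    table = {tuple(x): v for x, v in zip(X, y)}
    return lambda x: table.get(tuple(x), False)

X = [[1, 0], [0, 1], [1, 1]]
Y = [{"politics"}, {"sports"}, {"politics", "sports"}]
models = binary_relevance_fit(X, Y, ["politics", "sports"], fit_binary)
pred = binary_relevance_predict(models, [1, 1])
```

Because each label is learned independently, Binary Relevance ignores label correlations, which is exactly what Classifier Chains and Label Powerset (the stronger performers in the paper's comparison) try to capture.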