Similar Articles
A total of 18 similar articles were found (search time: 234 ms).
1.
张宁  朱礼军 《情报工程》2016,2(1):032-042
Automatic question answering systems have become a research hotspot in natural language processing in recent years, and question analysis, as the first stage of a question answering system, plays a key role in overall system performance. This paper briefly introduces the basic components of Chinese question analysis, mainly the development of word segmentation, part-of-speech tagging, and syntactic parsing; it then highlights research on question classification and question semantic analysis within Chinese question analysis; finally, it points out some of the difficult problems facing Chinese question analysis and offers a preliminary outlook on possible future research directions.
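The survey treats word segmentation and part-of-speech tagging as the first steps of question analysis. As a minimal illustration only (the paper does not prescribe a particular toolkit), the sketch below runs both steps on a hypothetical Chinese question with the open-source jieba library:

```python
# Minimal illustration of the preprocessing steps the survey covers:
# word segmentation and part-of-speech tagging of a Chinese question.
# jieba is used here only as an example toolkit; the paper does not
# prescribe any particular library.
import jieba.posseg as pseg

question = "中国的首都是哪里？"  # hypothetical example question

for word, pos in pseg.cut(question):
    print(f"{word}\t{pos}")
```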

2.
With the arrival of the big-data era, question answering systems have become one of the effective means by which people obtain information. Question classification, a key link in a question answering system, directly affects system performance. Current research on question classification focuses mainly on modern Chinese; studies on question classification for Chinese classical texts remain rare. Starting from the concept of question classification, this paper builds a question classification corpus for Chinese classics, designs a question classification taxonomy oriented to these texts, and compares the classification performance of support vector machines, recurrent neural networks, long short-term memory networks, bidirectional LSTM networks, and BERT. The experimental results show that, compared with the SVM and the traditional deep learning models, BERT classifies questions better, reaching an F1 score of 95.55% on the proposed taxonomy; BERT therefore has a clear advantage in question classification for Chinese classics and is worth promoting and applying.
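A hedged sketch of the kind of BERT-based question classifier the paper compares against the SVM and RNN/LSTM baselines. The checkpoint name, number of classes, and example question are assumptions for illustration; the paper's taxonomy and training data are not reproduced, and the classification head below would still need fine-tuning on an annotated question corpus before its predictions are meaningful.

```python
# Sketch of a BERT question classifier; all names and sizes are assumptions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

NUM_CLASSES = 6  # hypothetical size of the question taxonomy
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=NUM_CLASSES
)  # classification head is randomly initialized until fine-tuned

inputs = tokenizer("《论语》的作者是谁？", return_tensors="pt")  # example question
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```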

3.
Design and Implementation of Question Analysis in a Chinese FrameNet Question Answering System (Cited by: 1; self-citations: 0; citations by others: 1)
Drawing on frame semantics, this work constructs a semantic frame oriented to question analysis, the Q-frame, and uses it to perform semantic analysis of questions. A rule-based design for question analysis is proposed: the target word of each question type is determined from the dependency parse tree, pattern matching is used to carry out Q-frame-based semantic analysis of the question, and frame-semantic annotation of the question is completed by mapping, finally determining the question focus and question type.

4.
An automatic question answering system is developed for the frequently asked questions of the book publishing domain, focusing on question indexing and retrieval. For question indexing, questions are indexed by combining word segmentation, part-of-speech tagging, and shallow semantic analysis; for question retrieval, question similarity is computed with a method based on a feature vector space and semantic classes. Finally, the system is implemented.
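For the retrieval step, a minimal sketch of matching a new question against indexed FAQ questions by cosine similarity in a vector space. The paper additionally uses shallow semantic analysis and semantic classes, which are omitted here; the FAQ entries and the query below are hypothetical.

```python
# Vector-space question matching sketch; semantic classes are not modelled.
import jieba
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq_questions = [
    "图书的书号如何申请？",
    "出版合同一般包含哪些条款？",
    "电子书的版权如何登记？",
]
vectorizer = TfidfVectorizer(tokenizer=jieba.lcut, token_pattern=None)
faq_matrix = vectorizer.fit_transform(faq_questions)

query = "怎样申请书号？"
scores = cosine_similarity(vectorizer.transform([query]), faq_matrix)[0]
best = scores.argmax()
print(faq_questions[best], scores[best])  # most similar FAQ question and its score
```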

5.
The COVID-19 epidemic has entered the post-pandemic period. Taking Internet users as the research object, this study explores the public's health information needs in the post-pandemic period, with a view to providing a basis for information release by relevant authorities and for the formulation of related public policies. Using the social Q&A platform Zhihu as a case, the study combines LDA topic modelling with manual annotation to build a coding scheme for users' health information needs and, through period division combined with content analysis, topic analysis, and demand-intensity analysis, examines these needs from the temporal and demand…

6.
Patent term extraction is an important task in information extraction from patent documents: it supports the construction of patent-domain vocabularies and benefits downstream work such as Chinese word segmentation, syntactic parsing, and grammatical analysis. The paper analyzes the characteristics of patent terms, formulates corresponding corpus annotation rules for manual annotation, and trains and tests conditional random fields (CRFs) on the annotated data to extract terms in the communications domain. The annotation scheme uses character-based sequence labelling; precision, recall, and F-score reach 80.9%, 75.6%, and 78.2% respectively, outperforming a method that uses words and part-of-speech information as features, which indicates that the proposed patent term extraction method is effective.
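A hedged sketch of character-based sequence labelling with a CRF, in the spirit of the term extraction setup described above. The feature functions, the B/I/O tag spans, and the two toy sentences are assumptions; the paper's corpus and exact feature template are not reproduced.

```python
# Character-based B/I/O term labelling with a CRF (toy data, toy features).
import sklearn_crfsuite

def char_features(sent, i):
    # Simple character-window features; real systems use richer templates.
    feats = {"char": sent[i], "is_digit": sent[i].isdigit()}
    if i > 0:
        feats["prev_char"] = sent[i - 1]
    if i < len(sent) - 1:
        feats["next_char"] = sent[i + 1]
    return feats

# Toy training data: characters labelled B/I inside a term span, O outside.
sents = ["正交频分复用技术提高了频谱利用率", "基站通过信道估计获得参数"]
labels = [
    list("BIIIIIII") + ["O"] * 8,
    ["O"] * 4 + list("BIII") + ["O"] * 4,
]
X = [[char_features(s, i) for i in range(len(s))] for s in sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)
print(crf.predict([[char_features("信道估计", i) for i in range(4)]]))
```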

7.
Research on Question Processing in a Question Answering System for Farmers (Cited by: 1; self-citations: 0; citations by others: 1)
To make it easier for farmers to obtain information, the paper focuses on the development of a question answering system for farmers, organized into four modules: knowledge base construction, question processing, information retrieval, and answer extraction, with question processing as the research focus. Based on a summary of the characteristics of farmers' questions, a question classification method based on interrogative words and phrases is proposed. During question processing, politeness words are removed and a "special rule table" is built for informal interrogatives and for questions without interrogative words, so that questions can be effectively categorized and keywords extracted. A purpose-built "synonym expansion table" is then used to expand the keywords, with different weight baselines assigned, laying the foundation for the information retrieval module.
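An illustration of the rule-based processing described above: strip politeness words, classify the question by its interrogative word, and expand keywords with a synonym table. All word lists and rules below are hypothetical stand-ins for the paper's "special rule table" and "synonym expansion table".

```python
# Rule-based question classification and keyword expansion (toy rules).
POLITE_WORDS = ["请问", "麻烦问一下", "您好"]
INTERROGATIVE_RULES = {"什么": "DEFINITION", "怎么": "METHOD", "哪里": "LOCATION", "多少": "QUANTITY"}
SYNONYMS = {"水稻": ["稻谷", "稻子"], "化肥": ["肥料"]}

def classify_and_expand(question: str):
    for w in POLITE_WORDS:                    # remove politeness words
        question = question.replace(w, "")
    qtype = "OTHER"
    for w, t in INTERROGATIVE_RULES.items():  # classify by interrogative word
        if w in question:
            qtype = t
            break
    keywords = [k for k in SYNONYMS if k in question]
    expanded = {k: [k] + SYNONYMS[k] for k in keywords}  # synonym expansion
    return qtype, expanded

print(classify_and_expand("请问水稻怎么施化肥？"))
```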

8.
Governments at all levels have taken a series of measures to promote the disclosure of government information, with initial results. In practice, however, the open documents published on government web platforms lack topic classification and are annotated inconsistently, which has become a technical bottleneck for the open use of government information. How to classify and label the topics of the massive body of documents on existing government platforms accurately and consistently, so that they can support deep retrieval and recommendation services, is a key problem that urgently needs to be solved. Based on in-depth investigation, an automated topic classification method for open government documents is proposed. The method is built on a CNN-LSTM model and fuses semantic features from a pre-trained BERT model, allowing accurate topic classification of open government documents. The model achieves an overall accuracy of 63.52% for topic classification, with a best F1 value of 63.59%, providing a feasible solution to the problem of missing topic labels for government documents. The method can be combined with information retrieval and recommendation to give the public more precise access to government documents.
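A hedged sketch of the CNN-LSTM backbone the method is built on, written with Keras for illustration. The fusion with pre-trained BERT semantic features described in the abstract is omitted, and the vocabulary size, sequence length, and number of topic classes are assumptions.

```python
# CNN-LSTM text classifier backbone (BERT feature fusion not shown).
from tensorflow.keras import layers, models

VOCAB_SIZE, SEQ_LEN, NUM_TOPICS = 20000, 256, 10

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,), dtype="int32"),
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Conv1D(128, 5, activation="relu"),   # local n-gram features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                            # sequence-level features
    layers.Dense(NUM_TOPICS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```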

9.
In recent years, large amounts of distorted health information have spread widely on social platforms in the form of WeChat official account articles, seriously affecting how users acquire health knowledge and how well they use health information in medical decision-making. To curb the spread of distorted health information, it is necessary to identify and detect it automatically. This paper takes health articles published by official accounts such as 科普中国 and 丁香医生, together with health articles that have been debunked, as samples, and identifies distorted health information through word segmentation, stop-word removal, syntactic feature extraction, and text classification, selecting the best classifier according to performance indicators such as classification accuracy, precision, recall, and training time. In addition, to address the problems of polysemy ("one word, many meanings") and synonymy ("many words, one meaning") in text classification, the paper extracts semantic features of the texts via LDA (latent Dirichlet allocation) topic analysis and proposes a "syntax + semantics" feature extraction method; experiments show that each performance indicator improves over the semantics-only feature extraction method and over previous related models. The paper thus provides a new method and tool for identifying distorted health information in WeChat official account articles, supporting further monitoring and governance of such information.
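A hedged sketch of the "syntax + semantics" idea: LDA topic proportions serve as semantic features and are fed to a classifier. The toy articles and labels are hypothetical, and the paper's own syntactic features and corpus are not reproduced; in practice the topic features would be concatenated with those syntactic features.

```python
# LDA topic proportions as semantic features for a distorted-information classifier.
import jieba
import numpy as np
from gensim import corpora, models
from sklearn.linear_model import LogisticRegression

docs = ["多喝绿豆汤可以治愈癌症", "适量运动有助于控制血压", "吃大蒜能预防所有传染病", "均衡饮食有利于身体健康"]
labels = [1, 0, 1, 0]  # 1 = distorted health information, 0 = reliable (toy labels)

tokenized = [jieba.lcut(d) for d in docs]
dictionary = corpora.Dictionary(tokenized)
bow = [dictionary.doc2bow(t) for t in tokenized]
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, random_state=0)

def topic_vector(doc_bow, num_topics=2):
    vec = np.zeros(num_topics)
    for topic_id, prob in lda.get_document_topics(doc_bow, minimum_probability=0.0):
        vec[topic_id] = prob
    return vec

X = np.array([topic_vector(b) for b in bow])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```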

10.
徐琳宏  丁堃  陈娜  李冰 《情报学报》2020,39(1):25-37
Content-based citation sentiment analysis overcomes the homogenization of citations inherent in traditional frequency-based citation analysis and has become an important research focus in citation content analysis. Citation sentiment analysis, however, depends on annotated datasets, and large-scale, high-quality citation sentiment corpora are currently scarce, which seriously constrains research in this area. This paper therefore analyzes how sentiment is expressed in citations, proposes an annotation scheme suitable for representing citation sentiment, and describes in detail the techniques and methods used to build the corpus. Using a human-machine annotation strategy supported by a full-featured citation annotation system, a relatively large citation sentiment corpus of Chinese literature was constructed. The statistics show that in the fields of Chinese information processing and science and technology management, positive and negative citations account for 22% and 6% of all citations respectively, and the kappa value of the citation sentiment annotation reaches 0.852, indicating that the corpus objectively reflects authors' sentiment tendencies and can provide data support for research in related areas such as paper evaluation, citation network analysis, and sentiment analysis.
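The reported kappa of 0.852 measures inter-annotator agreement on sentiment labels. A minimal sketch of how such a value can be computed from two annotators' label sequences; the labels below are hypothetical.

```python
# Cohen's kappa between two annotators' sentiment labels (toy data).
from sklearn.metrics import cohen_kappa_score

annotator_a = ["pos", "neu", "neg", "neu", "pos", "neu", "neu", "pos"]
annotator_b = ["pos", "neu", "neg", "pos", "pos", "neu", "neu", "pos"]

print(cohen_kappa_score(annotator_a, annotator_b))
```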

11.
Automatic question answering using the web: Beyond the Factoid (Cited by: 4; self-citations: 0; citations by others: 4)
In this paper we describe and evaluate a Question Answering (QA) system that goes beyond answering factoid questions. Our approach to QA assumes no restrictions on the type of questions that are handled, and no assumption that the answers to be provided are factoids. We present an unsupervised approach for collecting question and answer pairs from FAQ pages, which we use to collect a corpus of 1 million question/answer pairs from FAQ pages available on the Web. This corpus is used to train various statistical models employed by our QA system: a statistical chunker used to transform a natural language-posed question into a phrase-based query to be submitted for exact match to an off-the-shelf search engine; an answer/question translation model, used to assess the likelihood that a proposed answer is indeed an answer to the posed question; and an answer language model, used to assess the likelihood that a proposed answer is a well-formed answer. We evaluate our QA system in a modular fashion, by comparing the performance of baseline algorithms against our proposed algorithms for various modules in our QA system. The evaluation shows that our system achieves reasonable performance in terms of answer accuracy for a large variety of complex, non-factoid questions.

12.
[Purpose/Significance] Classical texts are carriers of traditional Chinese culture, thought, and wisdom; combining the data acquisition, annotation, and analysis methods of digital humanities to recognize entities in these texts automatically is of great significance for follow-up applied research. [Method/Process] An ancient-text corpus is built from 25 pre-Qin classics that have been automatically segmented and manually annotated. Using corpora of different sizes and seven deep learning models (Bi-LSTM, Bi-LSTM-Attention, Bi-LSTM-CRF, Bi-LSTM-CRF-Attention, Bi-RNN, Bi-RNN-CRF, and BERT), the entities that make up historical events are extracted and the results compared. [Result/Conclusion] The Bi-LSTM-Attention and Bi-RNN-CRF models trained on the full corpus reach accuracies of 89.79% and 89.33% respectively, confirming the feasibility of applying deep learning to large-scale text datasets.
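A hedged sketch of the Bi-LSTM backbone shared by several of the compared taggers; the CRF and attention layers, the 25-classic corpus, and all hyperparameters are omitted or assumed for illustration.

```python
# Minimal Bi-LSTM sequence tagger backbone (no CRF/attention layers).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)  # tag scores per character

    def forward(self, char_ids):
        h, _ = self.lstm(self.embed(char_ids))
        return self.out(h)

# Toy forward pass: one 6-character sentence, 500-character vocabulary,
# 7 entity tags (all values are assumptions).
model = BiLSTMTagger(vocab_size=500, embed_dim=64, hidden_dim=128, num_tags=7)
scores = model(torch.randint(0, 500, (1, 6)))
print(scores.shape)  # torch.Size([1, 6, 7])
```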

13.
How does the collaboration network of researchers coalesce around a scientific topic? What sort of social restructuring occurs as a new field develops? Previous empirical explorations of these questions have examined the evolution of co-authorship networks associated with several fields of science, each noting a characteristic shift in network structure as fields develop. Historically, however, such studies have tended to rely on manually annotated datasets and therefore only consider a handful of disciplines, calling into question the universality of the observed structural signature. To overcome this limitation and test the robustness of this phenomenon, we use a comprehensive dataset of over 189,000 scientific articles and develop a framework for partitioning articles and their authors into coherent, semantically related groups representing scientific fields of varying size and specificity. We then use the resulting population of fields to study the structure of evolving co-authorship networks. Consistent with earlier findings, we observe a global topological transition as the co-authorship networks coalesce from a disjointed aggregate into a dense giant connected component that dominates the network. We validate these results using a separate, complementary corpus of scientific articles, and, overall, we find that the previously reported characteristic structural evolution of a scientific field's associated co-authorship network is robust across a large number of scientific fields of varying size, scope, and specificity. Additionally, the framework developed in this study may be used in other scientometric contexts in order to extend studies to compare across a larger range of scientific disciplines.
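An illustrative sketch of the structural measure at the heart of the reported transition: the share of authors that fall inside the largest connected component of a co-authorship network. The toy edge list is hypothetical and stands in for one time slice of a real network.

```python
# Fraction of authors in the giant connected component of a co-authorship graph.
import networkx as nx

# Each edge links two authors who co-authored at least one paper (toy data).
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"), ("G", "H")]
G = nx.Graph(edges)

giant = max(nx.connected_components(G), key=len)
print(len(giant) / G.number_of_nodes())  # share of authors in the giant component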

14.
Background: Question‐answering systems (or QA Systems) stand as a new alternative to Information Retrieval Systems. Most users frequently need to retrieve specific information about a factual question rather than a whole document. Objectives: The study evaluates the efficiency of QA systems as terminological sources for physicians, specialised translators and users in general. It assesses the performance of one open‐domain QA system, START, and one restricted‐domain QA system, MedQA. Method: The study collected two hundred definitional questions (What is…?), either general or specialised, from the health website WebMD. Sources used by the open‐domain QA system, START, and the restricted‐domain QA system, MedQA, were studied to retrieve answers, and later a range of evaluation measures (precision, Mean Reciprocal Rank, Total Reciprocal Rank, First Hit Success) was applied to mark the quality of answers. Results: It was established that both systems are useful in the retrieval of valid definitional healthcare information, with an acceptable degree of coherent and precise responses from both. The answers supplied by MedQA were more reliable than those of START in the sense that they came from specialised clinical or academic sources, most of them showing links to further research articles. Conclusions: Results obtained show the potential of this type of tool in the more general realm of information access, and the retrieval of health information. They may be considered a good, reliable and reasonably precise alternative in alleviating the information overload. Both QA systems can help professionals and users obtain healthcare information.
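A sketch of two of the evaluation measures named above, Mean Reciprocal Rank and First Hit Success, computed from the rank of the first correct answer returned for each question (None when no correct answer is returned). The ranks are hypothetical; Total Reciprocal Rank and precision are not shown.

```python
# MRR and First Hit Success over a list of first-correct-answer ranks (toy data).
def mean_reciprocal_rank(first_correct_ranks):
    return sum(1.0 / r for r in first_correct_ranks if r) / len(first_correct_ranks)

def first_hit_success(first_correct_ranks):
    return sum(1 for r in first_correct_ranks if r == 1) / len(first_correct_ranks)

ranks = [1, 3, None, 2, 1]  # e.g., five definitional questions
print(mean_reciprocal_rank(ranks), first_hit_success(ranks))
```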

15.
[Purpose/Significance] Against the broader trend of digital humanities research, this paper builds an automatic recognition model for place names in ancient Chinese, based on a pre-Qin ancient Chinese corpus and conditional random fields. [Method/Process] The internal and external features of place names in the Zuozhuan (《春秋左氏传》) are analyzed statistically to construct the feature template of the model. On a training and test corpus of 187,901 tokens, the place-name recognition performance of the conditional random field model is compared with that of a maximum entropy model; the CRF model, with a harmonic mean (F-score) of 90.94%, is selected as the best and is further validated on the Guoyu (《国语》) corpus. [Result/Conclusion] For automatic recognition of place names in ancient Chinese, the conditional random field model outperforms the maximum entropy model, and a CRF recognition model built from a manually annotated corpus achieves good recognition results.

16.
Based on an in-depth analysis of the distributed two-level architecture of the Collaborative Virtual Reference Service (CVRS) and its workflow for handling reference questions, six intelligent optimization solutions for CVRS are proposed: intelligent answering of form-based questions, an automatic answering robot, automatic duplicate checking against the knowledge base, automatic conversion of real-time reference questions into form-based questions, batch extraction of FAQ entries from the knowledge base, and automatic classification of the knowledge base. A technical roadmap centered on Chinese word segmentation is designed to implement full-text retrieval and automatic classification of the knowledge base, as well as text summarization of real-time chat transcripts and knowledge base content.
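A minimal sketch of the segmentation-centred full-text retrieval step: build an inverted index over jieba tokens of knowledge-base entries and look up a query. The entries are hypothetical, and ranking, duplicate checking, automatic classification, and summarization are not shown.

```python
# Inverted index over segmented knowledge-base entries (toy data).
from collections import defaultdict
import jieba

kb_entries = [
    "图书馆开放时间为每天八点至晚十点",
    "馆际互借服务需要提前三天申请",
    "电子资源校外访问请使用VPN",
]

inverted_index = defaultdict(set)
for doc_id, text in enumerate(kb_entries):
    for token in jieba.lcut_for_search(text):
        inverted_index[token].add(doc_id)

query_tokens = jieba.lcut_for_search("开放时间")
hits = set.union(*(inverted_index.get(t, set()) for t in query_tokens))
print([kb_entries[i] for i in sorted(hits)])
```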

17.
[Purpose/Significance] Building a People's Daily (人民日报) word segmentation corpus suited to the new era provides the latest carefully annotated corpus for Chinese information processing and a new language resource for diachronic analysis of modern Chinese. [Method/Process] After analyzing existing Chinese word segmentation corpora, the paper describes the data sources, annotation guidelines, and workflow of the new-era People's Daily corpus, evaluates the corpus by building automatic word segmentation models, and compares it with existing corpora. [Result/Conclusion] The new-era People's Daily corpus follows the basic processing guidelines for modern Chinese corpora and is large in scale with a long time span. Taking its January 2018 portion, a segmentation model based on conditional random fields is built and evaluated against the January 1998 People's Daily corpus; the resulting evaluation metrics show that the new-era corpus performs well overall, that the 1998 corpus cannot substitute for it, and that building this corpus now is highly necessary.
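A sketch of the span-based precision/recall/F1 evaluation that this kind of segmentation comparison relies on. The gold and predicted segmentations below are hypothetical single-sentence examples.

```python
# Word-segmentation evaluation by matching word spans (toy data).
def spans(words):
    out, start = set(), 0
    for w in words:
        out.add((start, start + len(w)))
        start += len(w)
    return out

gold = ["新", "时代", "人民", "日报", "语料库"]
pred = ["新时代", "人民", "日报", "语料库"]

g, p = spans(gold), spans(pred)
precision = len(g & p) / len(p)
recall = len(g & p) / len(g)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```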

18.
Through a number of media sources, today's consumers have unprecedented access to health information of varying reliability and authority. Empowered by this information, patients are more involved in their health care decisions and more willing to question physicians' advice. This poses a challenge for physicians who must now find time to read mass media health reports in addition to medical research. In order to help physicians with this task, librarians at the University of Manitoba Health Sciences Libraries created What Your Patient Reads - one-page summaries of health-related media reports supplemented with references to evidence-based medical literature.
