Similar Articles
20 similar articles found (search time: 31 ms)
1.
Question answering (QA) is the task of automatically answering a question posed in natural language. Several QA approaches currently exist, and, according to recent evaluation results, most of them are complementary: different systems are relevant for different kinds of questions. This fact suggests that a pertinent combination of various systems should improve on the individual results. This paper focuses on this problem, namely, selecting the correct answer from a given set of responses produced by different QA systems. In particular, it proposes a supervised multi-stream approach that decides on the correctness of answers based on a set of features that describe: (i) the compatibility between question and answer types, (ii) the redundancy of answers across streams, and (iii) the overlap and non-overlap information between the question–answer pair and the support text. Experimental results are encouraging; evaluated over a set of 190 questions in Spanish and using answers from 17 different QA systems, our multi-stream QA approach reached an estimated QA performance of 0.74, significantly outperforming the estimated performance of the best individual system (0.53) as well as the result of the best traditional multi-stream QA approach (0.60).
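As a rough illustration of the supervised multi-stream idea described above, the sketch below builds the three feature groups for one candidate answer and leaves the classifier step as a comment. The toy data, the token-overlap formulas, and the logistic-regression choice are assumptions for illustration, not the paper's exact setup.

```python
from sklearn.linear_model import LogisticRegression  # used in the commented training step

def answer_features(question, question_type, answer, stream_answers, support_text):
    """Three feature groups from the abstract: (i) type compatibility,
    (ii) redundancy across streams, (iii) overlap with the support text."""
    q = set(question.lower().split())
    a = set(answer["text"].lower().split())
    s = set(support_text.lower().split())
    return [
        1.0 if answer["type"] == question_type else 0.0,
        sum(x["text"] == answer["text"] for x in stream_answers) / len(stream_answers),
        len((q | a) & s) / max(len(q | a), 1),
    ]

answers = [{"text": "Madrid", "type": "LOCATION"},
           {"text": "Madrid", "type": "LOCATION"},
           {"text": "1561", "type": "DATE"}]
x = answer_features("What is the capital of Spain?", "LOCATION",
                    answers[0], answers, "Madrid is the capital of Spain.")
print(x)
# clf = LogisticRegression().fit(X_train, y_train)   # y = 1 iff the answer is correct
# best = max(candidates, key=lambda c: clf.predict_proba([c])[0, 1])
```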

2.
With advances in natural language processing (NLP) techniques and the need to deliver more fine-grained information or answers than a set of documents, various QA techniques have been developed for different question and answer types. A comprehensive QA system must be able to incorporate individual QA techniques as they are developed and integrate their functionality to maximize the system’s overall capability to handle increasingly diverse types of questions. To this end, a new QA method was developed that learns strategies for determining module invocation sequences and boosting answer weights for different types of questions. In this article, we examine the roles and effects of the answer verification and weight boosting method, the core of the automatically generated strategy-driven QA framework, in comparison with a strategy-less, straightforward answer-merging approach and a strategy-driven approach with manually constructed strategies.
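A minimal sketch of the strategy idea follows: each question type maps to a module invocation sequence plus a weight boost applied to answers the verifier accepts. The module names, question types, and boost values are illustrative assumptions, not the paper's learned strategies.

```python
STRATEGIES = {
    "factoid":    (["retrieve", "extract_entities", "verify"], 2.0),
    "definition": (["retrieve", "extract_definitions"], 1.5),
}

def answer(question, qtype, modules):
    """modules: dict mapping module name to a callable
    (question, candidates) -> updated candidate list."""
    sequence, boost = STRATEGIES[qtype]
    candidates = []
    for name in sequence:                       # invoke modules in strategy order
        candidates = modules[name](question, candidates)
    for c in candidates:                        # boost verified answers
        if c.get("verified"):
            c["weight"] *= boost
    return max(candidates, key=lambda c: c["weight"], default=None)
```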

3.
4.
Question Answering (QA) systems are developed to answer human questions. In this paper, we propose a framework for answering definitional and factoid questions, enriched by machine learning and evolutionary methods and integrated into a web-based QA system. Our main purpose is to build new features by combining state-of-the-art features with arithmetic operators. To accomplish this goal, we present a Genetic Programming (GP)-based approach. GP's task is to find the most promising formulas, built from a set of features and operators, that can accurately rank paragraphs, sentences, and words. We have also developed a QA system to test the new features. The input of our system is the text of documents retrieved by a search engine. To answer definitional questions, our system performs paragraph ranking and returns the most related paragraph. To answer factoid questions, the system evaluates sentences of the filtered paragraphs ranked by the previous module of our framework. After this phase, the system extracts one or more words from the ranked sentences based on a set of hand-made patterns and ranks them to find the final answer. We used Text REtrieval Conference (TREC) QA track questions, web data, and the AQUAINT and AQUAINT-2 datasets for training and testing our system. Results show that the learned features yield a better ranking than other evaluation formulas.
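To make the "formulas built from features and operators" idea concrete, here is a toy version that searches random expression trees over ranking features and keeps the one that best orders training items. Real GP evolves a population with crossover and mutation; this random search is only a minimal, assumed stand-in for that loop.

```python
import operator, random

OPS = [operator.add, operator.sub, operator.mul,
       lambda a, b: a / b if abs(b) > 1e-9 else 0.0]   # protected division

def random_formula(n_feats, depth=2):
    """Random expression tree over features f[0..n_feats-1] and OPS."""
    if depth == 0 or random.random() < 0.3:
        i = random.randrange(n_feats)
        return lambda f: f[i]
    op = random.choice(OPS)
    l = random_formula(n_feats, depth - 1)
    r = random_formula(n_feats, depth - 1)
    return lambda f: op(l(f), r(f))

def quality(formula, data):
    """data: list of (candidate feature vectors, index of gold candidate).
    Returns the fraction of questions whose gold candidate ranks first."""
    hits = sum(max(range(len(c)), key=lambda i: formula(c[i])) == g
               for c, g in data)
    return hits / len(data)

def search(data, n_feats, trials=1000):
    best, best_q = None, -1.0
    for _ in range(trials):                    # stand-in for GP generations
        f = random_formula(n_feats)
        q = quality(f, data)
        if q > best_q:
            best, best_q = f, q
    return best, best_q
```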

5.
This paper describes how questions can be characterized for question answering (QA) along different facets and focuses on questions that cannot be answered directly but can be divided into simpler ones that can be answered directly using existing QA capabilities. Since individual answers are composed to generate the final answer, we call this process compositional QA. The goal of the proposed QA method is to answer a composite question by dividing it into atomic ones, instead of developing an entirely new method tailored to the new question type. A question is analyzed automatically to determine its class, and its sub-questions are sent to the relevant QA modules. Answers returned from the individual QA modules are composed based on a predetermined plan corresponding to the question type. Experimental results based on 615 questions show that the compositional QA approach outperforms the simple routing method by about 17%. Considering the 115 composite questions only, the F-score almost tripled from the baseline.
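A minimal sketch of the compositional pipeline, assuming a single "comparison" question class; the classify/decompose/atomic_qa callables stand in for the paper's question analyzer and existing QA modules.

```python
PLANS = {
    # e.g. "Who is older, X or Y?" -> two factoid sub-questions, then compare.
    "comparison": lambda answers: max(answers, key=lambda a: a["value"])["entity"],
}

def answer_composite(question, classify, decompose, atomic_qa):
    qclass = classify(question)                      # e.g. "comparison"
    sub_questions = decompose(question, qclass)      # e.g. ["How old is X?", ...]
    answers = [atomic_qa(q) for q in sub_questions]  # route to existing QA modules
    return PLANS[qclass](answers)                    # compose per predetermined plan
```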

6.
Answer selection is the most complex phase of a question answering (QA) system. To solve this task, typical approaches use unsupervised methods such as computing the similarity between query and answer, optionally exploiting advanced syntactic, semantic, or logical representations.
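The simplest unsupervised baseline of the kind this abstract mentions scores each candidate by lexical similarity to the question; a TF-IDF cosine sketch (an assumed baseline, not this paper's method):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_answer(question, candidates):
    vec = TfidfVectorizer().fit([question] + candidates)
    sims = cosine_similarity(vec.transform([question]),
                             vec.transform(candidates))[0]
    return candidates[sims.argmax()]

print(select_answer("Who wrote Hamlet?",
                    ["Shakespeare wrote Hamlet.", "Hamlet is a city in Ohio."]))
```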

7.
Many existing biomedical extractive question answering methods are based on pre-trained models, but they neither take full advantage of the hidden-layer knowledge of pre-trained models nor consider span overlap between answers when predicting. To address these issues, we propose a new question answering model, ALBERT with Dynamic Routing and Answer Voting (ADRAV). ADRAV exploits hidden-layer knowledge through dynamic routing and accounts for span similarity between answers through answer voting. To improve performance, we also carry out pre-fine-tuning and add a dynamic parameter adjustment mechanism to the pre-fine-tuning process. Experimental results show that our model achieves significant performance improvements with fewer parameters on BioASQ 4b, 5b, 6b, and 9b, and outperforms SOTA baselines on BioASQ 4b and 6b.
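The abstract does not give ADRAV's exact voting rule; the sketch below illustrates one plausible reading, where each candidate span's model score is reinforced by token-level Jaccard overlap with the other candidates. Both the overlap measure and the additive reinforcement are assumptions.

```python
def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 0.0

def vote(candidates):
    """candidates: list of (span_text, model_score) pairs. Each span's
    score is reinforced by overlapping support from the other spans."""
    scored = [(text, score + sum(jaccard(text, other) * s
                                 for other, s in candidates if other != text))
              for text, score in candidates]
    return max(scored, key=lambda c: c[1])[0]

print(vote([("the BRCA1 gene", 0.6), ("BRCA1", 0.5), ("chromosome 17", 0.4)]))
```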

8.
Machine reading comprehension (MRC) is a challenging task in the field of artificial intelligence. Most existing MRC works contain a semantic matching module, either explicitly or intrinsically, to determine whether a piece of context answers a question. However, there is scant work that systematically evaluates different paradigms of semantic matching in MRC. In this paper, we conduct a systematic empirical study on semantic matching. We formulate a two-stage framework consisting of a semantic matching model and a reading model, based on pre-trained language models. We compare and analyze the effectiveness and efficiency of semantic matching modules with different setups on four types of MRC datasets. We verify that using semantic matching before a reading model improves both the effectiveness and efficiency of MRC. Compared with answering questions by extracting information from concise context, we observe that semantic matching yields larger improvements for answering questions with noisy and adversarial context. Matching coarse-grained context (e.g., paragraphs) to questions is more effective than matching fine-grained context (e.g., sentences and spans). We also find that semantic matching helps answer who/where/when/what/how/which questions, whereas it decreases MRC performance on why questions. This may imply that semantic matching helps to answer a question whose necessary information can be retrieved from a single sentence. These observations demonstrate the advantages and disadvantages of using semantic matching in different scenarios.
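A minimal sketch of the match-then-read setup the study examines: a cheap matcher filters context paragraphs, and only the survivors reach the expensive reading model. The TF-IDF matcher and the 0.2 threshold are stand-in assumptions, not the paper's pre-trained matching model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_then_read(question, paragraphs, reader, threshold=0.2):
    """Stage 1: score paragraphs against the question and drop poor
    matches. Stage 2: run the reading model on the survivors only."""
    vec = TfidfVectorizer().fit(paragraphs + [question])
    scores = cosine_similarity(vec.transform([question]),
                               vec.transform(paragraphs))[0]
    kept = [p for p, s in zip(paragraphs, scores) if s >= threshold]
    return reader(question, kept or paragraphs)
```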

9.
Question answering websites are becoming an ever more popular knowledge-sharing platform. On such websites, people may ask any type of question and then wait for someone else to answer it. However, in this manner, askers may not obtain correct answers from appropriate experts. Recently, various approaches have been proposed to automatically find experts on question answering websites. In this paper, we propose a novel hybrid approach to effectively find experts for the category of the target question. Our approach considers user subject relevance, user reputation, and the authority of a category in finding experts. A user’s subject relevance denotes the relevance of the user’s domain knowledge to the target question. A user’s reputation is derived from the user’s historical question-answering records, while user authority is derived from link analysis. Moreover, the proposed approach has been extended into a question-dependent approach that considers the relevance of historical questions to the target question in deriving user domain knowledge, reputation, and authority. We used a dataset obtained from Yahoo! Answer Taiwan to evaluate our approach. Our experimental results show that the proposed methods outperform other conventional methods.
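A sketch of how the three signals might be combined; the linear weights, the best-answer-ratio reputation, and PageRank as the link-analysis authority are all assumptions standing in for the paper's formulation.

```python
import networkx as nx

def rank_experts(relevance, history, answer_graph, w=(0.4, 0.3, 0.3)):
    """relevance: {user: similarity of user's domain knowledge to the
    question's category}; history: {user: (best_answers, total_answers)};
    answer_graph: nx.DiGraph with asker -> answerer edges."""
    authority = nx.pagerank(answer_graph)        # link-analysis authority
    scores = {}
    for user, rel in relevance.items():
        best, total = history.get(user, (0, 1))
        reputation = best / max(total, 1)        # historical QA record
        scores[user] = (w[0] * rel + w[1] * reputation
                        + w[2] * authority.get(user, 0.0))
    return sorted(scores, key=scores.get, reverse=True)
```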

10.
Humans are able to reason over multiple sources to arrive at the correct answer. In the context of Multiple Choice Question Answering (MCQA), knowledge graphs can provide subgraphs based on different combinations of questions and answers, mimicking the way humans find answers. However, current research mainly focuses on independent reasoning over a single graph for each question–answer pair, lacking joint reasoning among all answer candidates. In this paper, we propose a novel method, KMSQA, which leverages multiple subgraphs from the large knowledge graph ConceptNet to model the comprehensive reasoning process. We encode the knowledge graphs with shared Graph Neural Networks (GNNs) and perform joint reasoning across multiple subgraphs. We evaluate our model on two common datasets: CommonsenseQA (CSQA) and OpenBookQA (OBQA). Our method achieves an exact match score of 74.53% on CSQA and 71.80% on OBQA, outperforming all eight baselines.

11.
Automatic question answering systems build on search engines by incorporating natural language knowledge and applications; compared with traditional keyword-matching search engines, they can better satisfy users' retrieval needs. This paper introduces a model for an automatic QA system in the computer operating systems domain, describes the development process, and presents the design and implementation of the system. Practice shows that the system can answer user questions fairly accurately.

12.
The purpose of the current study is to identify the user criteria and data-driven features, both textual and non-textual, for assessing the quality of answers posted on social question-and-answer sites (social Q&A) across four knowledge domains: Science, Technology, Art, and Recreation. A comprehensive review of the literature on quality assessment of information produced in social contexts was carried out to develop the study's theoretical framework. A total of 23 user criteria and 24 data features were proposed and tested with high-quality answers obtained from four social Q&A sites in Stack Exchange. Findings indicate that content-related criteria and user and review features were the most frequently used in quality assessments, while the importance of user criteria and data features varied across the knowledge domains. On the Technology Q&A site, which contains mostly self-help questions, the utility class was the most frequently used group of criteria. The socio-emotional class was more popular in discussion-oriented topic categories such as Art and Recreation, where people seek others’ opinions or advice. Users of the Art and Recreation Q&A sites in Stack Exchange appear to place more value on answerers’ effort and time, good attitude or manners, personal experience, and shared tastes. The importance of user features and the emphasis on answerer expertise were observed on the Science Q&A site. Examining the connection or gap between user quality criteria and data features across knowledge domains could help to better understand users’ evaluation behaviors for their preferred answers, and identify the potential of social Q&A for user education/intervention in answer-quality evaluation. This examination also offers practical guidance for designing more effective social Q&A platforms, considering how to customize community support systems, motivate contributions, and control content quality.

13.
Open-domain question answering is a challenging research direction in natural language processing. Answer extraction is the key component of a QA system; in pattern-matching-based answer extraction, answers are extracted via the answer patterns associated with a question, so the evaluation of answer patterns plays a decisive role in ranking candidate answers and selecting the final answer. Building on the traditional answer-pattern evaluation method, this paper proposes an improved pattern evaluation method and runs answer extraction experiments under both the traditional and the improved methods. The experimental results show that the improved evaluation method clearly improves answer extraction performance.
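The abstract does not spell out either evaluation formula. A common traditional baseline in pattern-based answer extraction scores a pattern by its precision over the training pairs it matches; the sketch below shows that baseline only, not the paper's improved method.

```python
import re

def pattern_precision(pattern, qa_pairs):
    """Score an answer pattern by its precision over (support text,
    gold answer) pairs; the pattern must contain one capture group."""
    matched = correct = 0
    for text, gold in qa_pairs:
        m = re.search(pattern, text)
        if m:
            matched += 1
            correct += (m.group(1) == gold)
    return correct / matched if matched else 0.0

print(pattern_precision(r"was born in (\d{4})",
                        [("Mozart was born in 1756.", "1756"),
                         ("He was born in Bonn.", "1770")]))  # 1 match, 1 correct -> 1.0
```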

14.
As one of the more challenging cross-modal tasks, video question answering (VideoQA) aims to fully understand video content and answer relevant questions. The mainstream approach in current work extracts appearance and motion features to characterize videos separately, ignoring the interactions between them and with the question. Furthermore, crucial semantic interaction details between visual objects are overlooked. In this paper, we propose a novel Relation-aware Graph Reasoning (ReGR) framework for video question answering, which combines appearance–motion and location–semantic interaction relations between visual objects. For the interaction between appearance and motion, we design the Appearance–Motion Block, which is question-guided to capture the interdependence between appearance and motion. For the interaction between location and semantics, we design the Location–Semantic Block, which uses a Multi-Relation Graph Attention Network to capture geometric position and semantic interactions between objects. Finally, question-driven Multi-Visual Fusion captures more accurate multimodal representations. Extensive experiments on three benchmark datasets, TGIF-QA, MSVD-QA, and MSRVTT-QA, demonstrate the superiority of ReGR over state-of-the-art methods.

15.
Similarity computation is an important part of automatic question answering. To ensure a reasonable ranking of candidate answers and to address traditional QA systems' inability to evaluate similarity comprehensively and efficiently, this paper proposes using a composite index method to combine keyword similarity, semantic similarity, and other measures into an overall similarity score. For candidate answers that contain too much redundant information, which hinders answer extraction, a decaying similarity parameter is designed to counter the effect of sentence redundancy on answer extraction. Experimental results show that the composite-index similarity algorithm effectively improves QA accuracy.
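One plausible reading of the composite index with decay, as a worked sketch; the equal weights and the exponential decay form are assumptions, since the abstract does not give the exact formula.

```python
def composite_similarity(kw_sim, sem_sim, redundancy,
                         weights=(0.5, 0.5), decay=0.8):
    """kw_sim, sem_sim in [0, 1]; redundancy = fraction of the candidate
    sentence's tokens that carry no information about the question."""
    base = weights[0] * kw_sim + weights[1] * sem_sim   # composite index
    return base * (decay ** redundancy)                 # damp redundant answers

print(composite_similarity(0.7, 0.9, redundancy=0.5))   # ~0.716
```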

16.
王日花 《情报科学》2021,39(10):76-87
【目的/意义】解决自动问答系统构建过程中数据集构建成本高的问题,以及自动问答过程中仅考虑问题或 答案本身相关性的局限。【方法/过程】提出了一种融合标注问答库和社区问答数据的数据集构建方法,构建问题关 键词-问题-答案-答案簇多层异构网络模型,并给出了基于该模型的自动问答算法。获取图书馆语料进行处理作 为实验数据,将BERT-Cos、AINN、BiMPM模型作为对比对象进行了实验与分析。【结果/结论】通过实验得到了各 模型在图书馆自动问答任务上的效果,本文所提模型在各评价指标上均优于其他模型,模型准确率达87.85%。【创 新/局限】本文提出的多数据源融合数据集构建方法和自动问答模型在问答任务中相对于已有方法具有更好的表 现,同时根据模型效果分析给出用户提问词长建议。  相似文献   

17.
Optimal answerer ranking for new questions in community question answering
Community question answering (CQA) services that enable users to ask and answer questions have become popular on the internet. However, many new questions are not resolved by appropriate answerers effectively. To address this question routing task, we treat it as a ranking problem and rank potential answerers by the probability that they can solve the given new question. We use a tensor model and a topic model simultaneously to extract latent semantic relations among asker, question, and answerer. We then propose a learning procedure based on these models to obtain an optimal ranking of answerers for new questions by optimizing the multi-class AUC (Area Under the ROC Curve). Experimental results on two real-world CQA datasets show that the proposed method predicts appropriate answerers for new questions and outperforms other state-of-the-art approaches.
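On the evaluation side, ranking answerers by a model probability and measuring ranking quality with AUC can be sketched in a few lines (the tensor/topic models themselves are not reproduced; the scores below are placeholders):

```python
from sklearn.metrics import roc_auc_score

# 1 = this answerer actually resolved the question, 0 = did not.
y_true  = [1, 0, 0, 1, 0]
y_score = [0.9, 0.3, 0.2, 0.7, 0.4]   # model probability per answerer
print(roc_auc_score(y_true, y_score)) # 1.0: every positive outranks every negative
```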

18.
[Purpose/Significance] This study evaluates the natural-language question answering capability of four Chinese and international search engines — Google, Bing, 百度 (Baidu), and 搜狗 (Sogou) — to reveal the trend of search engines evolving into systems that combine search with automatic question answering, and to compare the engines' ability to answer different types of natural-language questions. [Method/Process] Three types of questions (person, time, and location) were sampled from the QA evaluation tracks of the Text REtrieval Conference (TREC) and the NLPCC conference and submitted as searches. Responses were scored manually according to whether the engine returned an exact answer or a featured snippet containing the correct answer, and the results were compared using one-way ANOVA and multiple-comparison tests. [Result/Conclusion] Mainstream Chinese and international search engines all possess some natural-language QA capability, but considerable room for improvement remains. Google performs best overall, but answers person questions less well than Sogou. All engines perform better on time questions than on person and location questions.
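The statistical comparison described here can be sketched with scipy's one-way ANOVA; the per-question scores below are made-up placeholders, not the study's data, and the 0/1/2 scoring scheme is an assumption.

```python
from scipy.stats import f_oneway

# Per-question scores (0 = no answer, 1 = snippet contains the answer,
# 2 = exact direct answer) -- placeholder values only.
google = [2, 2, 1, 2, 0, 2]
bing   = [1, 0, 1, 2, 0, 1]
baidu  = [1, 1, 0, 2, 0, 1]
sogou  = [2, 1, 1, 1, 0, 2]

stat, p = f_oneway(google, bing, baidu, sogou)
print(f"F={stat:.3f}, p={p:.3f}")  # p < 0.05 would justify pairwise tests
```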

19.
[Purpose/Significance] This study aims to link the fragmented answers in social Q&A communities and to provide users with high-quality answers organized by topic, enabling better knowledge services. [Method/Process] First, the Doc2vec algorithm computes semantic similarity between answers and builds an answer semantic network. Second, the Louvain algorithm partitions the network into communities, the TextRank algorithm extracts keywords from the documents under each topic, and word clouds visualize each topic. Finally, the PageRank algorithm ranks the clustered answer semantic network, achieving topic-level aggregation and ranking of answer documents. [Result/Conclusion] An empirical study was conducted on Q&A data from 知乎 (Zhihu). The results show that the proposed aggregation and ranking method not only presents the strength of associations between answers and the main content of each topic intuitively, but also provides per-topic answer rankings, automatically filtering high-quality answers for users. [Innovation/Limitation] An answer semantic network is proposed, and on top of it, an answer knowledge organization method that integrates aggregation, topic visualization, and ranking.
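A compact sketch of the pipeline using gensim and networkx: embed answers with Doc2vec, link sufficiently similar pairs into a semantic network, detect topic communities with Louvain, and rank answers with PageRank. The toy corpus and the 0.5 similarity threshold are assumptions; TextRank keyword extraction for the word clouds is omitted.

```python
import networkx as nx
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from networkx.algorithms.community import louvain_communities

answers = ["python is great for nlp", "nlp works well in python",
           "hiking requires good boots", "good boots matter when hiking"]

docs = [TaggedDocument(a.split(), [i]) for i, a in enumerate(answers)]
model = Doc2Vec(docs, vector_size=32, min_count=1, epochs=50, seed=1)

G = nx.Graph()
G.add_nodes_from(range(len(answers)))
for i in range(len(answers)):
    for j in range(i + 1, len(answers)):
        sim = float(model.dv.similarity(i, j))
        if sim > 0.5:                      # link semantically similar answers
            G.add_edge(i, j, weight=sim)

topics = louvain_communities(G, seed=1)    # topic clusters of answers
rank = nx.pagerank(G)                      # importance of each answer
print(topics, sorted(rank, key=rank.get, reverse=True))
```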

20.
Recently, question series have become a focus of research in question answering. These series comprise individual factoid, list, and “other” questions organized around a central topic, and represent abstractions of user–system dialogs. Existing evaluation methodologies have yet to catch up with this richer task model, as they fail to take into account contextual dependencies and different user behaviors. This paper presents a novel simulation-based methodology for evaluating answers to question series that addresses some of these shortcomings. Using this methodology, we examine two behavior models: a “QA-styled” user and an “IR-styled” user. Results suggest that an off-the-shelf document retrieval system is competitive with state-of-the-art QA systems on this task. Advantages and limitations of evaluations based on user simulations are also discussed.
