Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Question answering (QA) aims at finding exact answers to a user's question in a large collection of documents. Most QA systems combine information retrieval with extraction techniques to identify a set of likely candidates and then apply some ranking strategy to generate the final answers. This ranking process can be challenging, as it entails identifying the relevant answers among many irrelevant ones. It is even more challenging in multi-strategy QA, in which multiple answering agents are used to extract answer candidates. Because answer candidates come from different agents with different score distributions, how the candidates are merged plays an important role in answer ranking. In this paper, we propose a unified probabilistic framework that combines multiple sources of evidence to address the challenges of answer ranking and answer merging. The hypotheses of the paper are that: (1) the framework effectively combines multiple sources of evidence to identify answer relevance and answer correlation in answer ranking, (2) the framework supports merging of answer candidates returned by multiple extraction techniques, (3) the framework can handle list questions as well as factoid questions, (4) the framework can easily be applied to a different QA system, and (5) the framework significantly improves the performance of a QA system. An extensive set of experiments was conducted to support these hypotheses and demonstrate the effectiveness of the framework. This work substantially extends the preliminary research in Ko et al. (2007a), A probabilistic framework for answer selection in question answering, in Proceedings of NAACL/HLT.
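As a rough illustration of how such a framework might combine evidence, the sketch below estimates P(correct | evidence) with a logistic model and merges duplicate candidates from different agents under a noisy-OR independence assumption. The feature names and the merging rule are assumptions for illustration, not the paper's exact formulation.

```python
# A minimal sketch (not the authors' exact model) of probabilistic answer
# ranking: a logistic model estimates P(correct | evidence features), and
# candidates that different extraction agents agree on are merged by
# combining their probabilities under an independence assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [agent_score, redundancy_count, type_match]; labels: 1 = correct.
X_train = np.array([[0.9, 3, 1], [0.2, 1, 0], [0.7, 2, 1], [0.1, 1, 0]])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

def merge_and_rank(candidates):
    """candidates: list of (answer_string, feature_vector) from all agents."""
    merged = {}
    for answer, feats in candidates:
        p = model.predict_proba([feats])[0, 1]
        # Noisy-OR merge: P(correct) = 1 - prod(1 - p_i) over duplicates.
        merged[answer] = 1.0 - (1.0 - merged.get(answer, 0.0)) * (1.0 - p)
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

print(merge_and_rank([("1969", [0.8, 2, 1]),
                      ("1969", [0.6, 2, 1]),
                      ("1970", [0.3, 1, 0])]))
```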

2.
With the advances in natural language processing (NLP) techniques and the need to deliver more fine-grained information or answers than a set of documents, various QA techniques have been developed for different question and answer types. A comprehensive QA system must be able to incorporate individual QA techniques as they are developed and integrate their functionality to maximize the system's overall capability to handle increasingly diverse types of questions. To this end, a new QA method was developed to learn strategies for determining module invocation sequences and boosting answer weights for different types of questions. In this article, we examine the roles and effects of the answer verification and weight boosting method, the core of the automatically generated strategy-driven QA framework, in comparison with a strategy-less, straightforward answer-merging approach and with a strategy-driven approach that uses manually constructed strategies.

3.
4.
5.
Question answering (QA) is the task of automatically answering a question posed in natural language. Several QA approaches currently exist, and, according to recent evaluation results, most of them are complementary: different systems are relevant for different kinds of questions. This fact suggests that an appropriate combination of various systems should improve on the individual results. This paper focuses on this problem, namely, selecting the correct answer from a given set of responses produced by different QA systems. In particular, it proposes a supervised multi-stream approach that decides on the correctness of answers based on a set of features describing: (i) the compatibility between question and answer types, (ii) the redundancy of answers across streams, and (iii) the overlap and non-overlap information between the question–answer pair and the support text. Experimental results are encouraging; evaluated on a set of 190 questions in Spanish with answers from 17 different QA systems, our multi-stream QA approach reached an estimated QA performance of 0.74, significantly outperforming the estimated performance of the best individual system (0.53) as well as the result of the best traditional multi-stream QA approach (0.60).
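The sketch below renders the three feature groups and a supervised decision step in Python; the specific feature computations and the random-forest classifier are illustrative assumptions, not necessarily the paper's setup.

```python
# A hedged sketch of the multi-stream idea: each candidate answer is
# described by (i) question/answer type compatibility, (ii) redundancy
# across streams, and (iii) lexical overlap/non-overlap between the
# question-answer pair and the support text; a supervised classifier
# then decides correctness.
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def answer_features(question, answer, support, all_stream_answers,
                    expected_type, answer_type):
    q = set(question.lower().split())
    s = set(support.lower().split())
    return [
        1.0 if expected_type == answer_type else 0.0,                   # (i) type match
        Counter(all_stream_answers)[answer] / len(all_stream_answers),  # (ii) redundancy
        len(q & s) / max(len(q), 1),                                    # (iii) overlap
        len(q - s) / max(len(q), 1),                                    # (iii) non-overlap
    ]

# With labelled training pairs, the classifier ranks candidates by P(correct).
# The training rows here are synthetic placeholders.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
X = [[1.0, 0.6, 0.5, 0.5], [0.0, 0.1, 0.1, 0.9],
     [1.0, 0.4, 0.4, 0.6], [0.0, 0.2, 0.2, 0.8]]
y = [1, 0, 1, 0]
clf.fit(X, y)
print(clf.predict_proba([[1.0, 0.5, 0.45, 0.55]])[:, 1])  # score for a new candidate
```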

6.
Machine reading comprehension (MRC) is a challenging task in the field of artificial intelligence. Most existing MRC works contain a semantic matching module, either explicitly or intrinsically, to determine whether a piece of context answers a question. However, there is scant work that systematically evaluates different paradigms of semantic matching in MRC. In this paper, we conduct a systematic empirical study on semantic matching. We formulate a two-stage framework consisting of a semantic matching model and a reading model, both based on pre-trained language models. We compare and analyze the effectiveness and efficiency of semantic matching modules with different setups on four types of MRC datasets. We verify that applying semantic matching before a reading model improves both the effectiveness and the efficiency of MRC. Compared with answering questions by extracting information from concise context, we observe that semantic matching yields larger improvements for answering questions with noisy and adversarial context. Matching coarse-grained context to questions, e.g., paragraphs, is more effective than matching fine-grained context, e.g., sentences and spans. We also find that semantic matching is helpful for answering who/where/when/what/how/which questions, whereas it decreases MRC performance on why questions. This may imply that semantic matching helps to answer a question whose necessary information can be retrieved from a single sentence. These observations demonstrate the advantages and disadvantages of using semantic matching in different scenarios.
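As a concrete illustration of the two-stage setup, the sketch below filters paragraphs with a matching score before invoking a reading model. TF-IDF cosine similarity stands in for the pre-trained matching model purely to keep the example self-contained, and `reader` is a placeholder for any extractive reading model.

```python
# A simplified stand-in for the two-stage pipeline: stage 1 scores each
# paragraph against the question and keeps only the top matches (the study
# found coarse-grained, paragraph-level matching most effective); stage 2,
# the reading model, extracts the answer from the reduced context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_then_read(question, paragraphs, reader, top_k=2):
    vec = TfidfVectorizer().fit(paragraphs + [question])
    scores = cosine_similarity(vec.transform([question]),
                               vec.transform(paragraphs))[0]
    best = sorted(range(len(paragraphs)), key=lambda i: scores[i],
                  reverse=True)[:top_k]
    return reader(question, " ".join(paragraphs[i] for i in best))

# Example with a trivial "reader" that just returns its matched context:
answer = match_then_read("Who wrote Hamlet?",
                         ["Shakespeare wrote Hamlet around 1600.",
                          "The weather in London is often rainy."],
                         reader=lambda q, ctx: ctx, top_k=1)
print(answer)
```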

7.
Recently, question series have become one focus of research in question answering. These series consist of individual factoid, list, and “other” questions organized around a central topic, and represent abstractions of user–system dialogs. Existing evaluation methodologies have yet to catch up with this richer task model, as they fail to take into account contextual dependencies and different user behaviors. This paper presents a novel simulation-based methodology for evaluating answers to question series that addresses some of these shortcomings. Using this methodology, we examine two different behavior models: a “QA-styled” user and an “IR-styled” user. Results suggest that an off-the-shelf document retrieval system is competitive with state-of-the-art QA systems on this task. The advantages and limitations of evaluations based on user simulations are also discussed.

8.
[Purpose/Significance] This study evaluates the natural-language question-answering capability of four Chinese and international search engines (Google, Bing, Baidu, and Sogou), revealing the trend of search engines evolving into systems that combine search with automatic question answering, and compares the engines' ability to answer different types of natural-language questions. [Method/Process] Three types of questions (person, time, and place) were extracted from the question-answering evaluation tracks of the Text REtrieval Conference (TREC) and the Conference on Natural Language Processing and Chinese Computing (NLPCC) and submitted to the engines. Responses were manually scored according to whether the engine returned an exact answer or a featured snippet containing the correct answer, and the results were compared using one-way analysis of variance (ANOVA) and multiple-comparison tests. [Result/Conclusion] Mainstream Chinese and international search engines all possess some natural-language question-answering capability, but considerable room for improvement remains. Google performed best overall, though it was weaker than Sogou on person questions. All engines performed better on time questions than on person and place questions.
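The statistical comparison the study describes can be reproduced in miniature as follows; the per-question score arrays are made-up placeholders, not the study's data.

```python
# A sketch of the one-way ANOVA comparison: per-question answer scores for
# each engine are compared for a significant difference in means.
from scipy.stats import f_oneway

google = [1, 1, 0.5, 1, 0]    # 1 = exact answer, 0.5 = snippet with answer, 0 = none
bing   = [0.5, 1, 0, 0.5, 0]
baidu  = [0.5, 0.5, 0, 1, 0]
sogou  = [1, 0.5, 0.5, 1, 0.5]

stat, p = f_oneway(google, bing, baidu, sogou)
print(f"F = {stat:.3f}, p = {p:.3f}")  # p < 0.05 would suggest the engines differ
```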

9.
Question classification (QC) involves classifying a given question based on the expected answer type and is an important task in question answering (QA) systems. Existing approaches to question classification use the full training dataset to fine-tune models, which is expensive, as developing large labelled datasets takes considerable time. Hence, there is a need for approaches that achieve comparable or state-of-the-art performance using limited training instances. In this paper, we propose an approach that uses data augmentation as a tool to generate additional training instances. We evaluate our proposed approach on two question classification datasets, TREC and ICHI. Experimental results show that our approach reduces the requirement for labelled instances (a) by up to 81.7% while achieving a new state-of-the-art accuracy of 98.11 on the TREC dataset and (b) by up to 75% while achieving 67.9 on the ICHI dataset.
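A minimal augmentation operator in this spirit might look like the following; the synonym table and the random-replacement rule are illustrative assumptions, since the abstract does not specify the exact augmentation technique used.

```python
# Illustrative data augmentation for question classification: generate extra
# training questions by random synonym replacement on a tiny placeholder
# synonym table.
import random

SYNONYMS = {"largest": ["biggest"], "country": ["nation"], "invented": ["created"]}

def augment(question, n=2, seed=0):
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        words = question.split()
        for i, w in enumerate(words):
            if w.lower() in SYNONYMS and rng.random() < 0.5:
                words[i] = rng.choice(SYNONYMS[w.lower()])
        variants.append(" ".join(words))
    return variants

print(augment("Which country invented paper?"))
```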

10.
Question categorization, which assigns one of a set of predefined categories to a user's question according to the question's topic or content, is a useful technique in user-interactive question answering systems. In this paper, we propose an automatic method for question categorization in a user-interactive question answering system. The method includes four steps: feature space construction, topic-wise word identification and weighting, semantic mapping, and similarity calculation. We first construct the feature space from all accumulated questions and calculate the feature vector of each predefined category, which contains certain accumulated questions. When a new question is posted, the semantic pattern of the question is used to identify and weight its important words. The question is then semantically mapped into the constructed feature space to enrich its representation. Finally, the similarity between the question and each category is calculated from their feature vectors, and the category with the highest similarity is assigned to the question. The experimental results show that our proposed method achieves good categorization precision and outperforms traditional categorization methods on the selected test questions.
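A stripped-down version of this pipeline, with plain TF-IDF approximating the paper's feature construction and semantic mapping, could look like this:

```python
# A minimal sketch of the categorization pipeline: build a feature space
# from accumulated questions, represent each category by the centroid of
# its questions, and assign a new question to the most similar category.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

questions = ["How do I reset my password", "Why is my bill so high",
             "How to change my password", "Where can I see my invoice"]
labels    = ["account", "billing", "account", "billing"]

vec = TfidfVectorizer().fit(questions)
X = vec.transform(questions).toarray()
centroids = {c: X[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
             for c in set(labels)}

def categorize(new_question):
    q = vec.transform([new_question]).toarray()[0]
    return max(centroids, key=lambda c: cosine_similarity([q], [centroids[c]])[0, 0])

print(categorize("I forgot my password"))  # -> "account"
```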

11.
Question Answering (QA) systems are developed to answer human questions. In this paper, we propose a framework for answering definitional and factoid questions, enriched by machine learning and evolutionary methods and integrated into a web-based QA system. Our main purpose is to build new features by combining state-of-the-art features with arithmetic operators. To accomplish this goal, we present a Genetic Programming (GP)-based approach: the task of the GP component is to find the most promising formulas, built from a set of features and operators, that can accurately rank paragraphs, sentences, and words. We also developed a QA system to test the new features. The input to our system is the text of documents retrieved by a search engine. To answer definitional questions, the system performs paragraph ranking and returns the most related paragraph. To answer factoid questions, it evaluates the sentences of the filtered paragraphs ranked by the previous module of our framework; it then extracts one or more words from the ranked sentences based on a set of hand-made patterns and ranks them to find the final answer. We used Text REtrieval Conference (TREC) QA track questions, web data, and the AQUAINT and AQUAINT-2 datasets for training and testing. Results show that the learned features produce a better ranking than other evaluation formulas.
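To make the GP idea concrete, the toy sketch below builds random formula trees from base features and arithmetic operators and evaluates them on a candidate's feature values; the feature names, operator set, and tree depth are illustrative assumptions, and the fitness/selection loop is omitted.

```python
# A toy rendering of the GP idea: candidate ranking formulas are trees built
# from base features and arithmetic operators; the fittest formula (by some
# ranking metric on training data) would survive across generations.
import operator
import random

OPS = [operator.add, operator.sub, operator.mul]
FEATURES = ["tfidf", "overlap", "proximity"]   # illustrative feature names

def random_formula(depth, rng):
    """Grow a random expression tree over features and operators."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(FEATURES)
    return (rng.choice(OPS),
            random_formula(depth - 1, rng),
            random_formula(depth - 1, rng))

def evaluate(formula, feats):
    """Evaluate a formula tree on one candidate's feature values."""
    if isinstance(formula, str):
        return feats[formula]
    op, left, right = formula
    return op(evaluate(left, feats), evaluate(right, feats))

rng = random.Random(0)
f = random_formula(3, rng)
print(evaluate(f, {"tfidf": 0.8, "overlap": 0.5, "proximity": 0.2}))
```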

12.
Automatic question answering systems add natural-language knowledge and techniques on top of a search engine; compared with traditional keyword-matching search engines, they can better satisfy users' retrieval needs. This paper introduces a model of an automatic question answering system for the computer operating systems domain, describes the development process, and presents the design and implementation of a domain-specific automatic question answering system. Practical use shows that the system can answer user questions fairly accurately.

13.
Recent studies point out that VQA models tend to rely on the language prior in the training data to answer questions, which prevents them from generalizing to out-of-distribution test data. To address this problem, existing approaches reduce the effect of the language prior by constructing negative image–question pairs, but they cannot provide the proper visual grounds for answering the question. In this paper, we present a new debiasing framework for VQA by Learning to Sample paired image–question data and a Prompt for the given question (LSP). Specifically, we construct negative image–question pairs at a certain sampling rate to prevent the model from overly relying on visual shortcut content. Notably, question types provide a strong hint for answering questions; we therefore use the question type to constrain the sampling of negative question–image pairs, and further learn a question type-guided prompt for better question comprehension. Extensive experiments on two public benchmarks, VQA-CP v2 and VQA v2, demonstrate that our model achieves new state-of-the-art overall accuracy, i.e., 61.95% and 65.26%, respectively.
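A schematic of the type-constrained negative sampling might look like the following; the field names and sampling rate are assumptions for illustration, not the paper's implementation.

```python
# Constrained negative sampling: with probability neg_rate, a question is
# paired with an image from a *different* example whose question has the
# same question type, so the model cannot lean on visual shortcuts.
import random

def sample_pairs(examples, neg_rate=0.3, seed=0):
    """examples: list of dicts with keys 'image', 'question', 'qtype'."""
    rng = random.Random(seed)
    by_type = {}
    for ex in examples:
        by_type.setdefault(ex["qtype"], []).append(ex)
    pairs = []
    for ex in examples:
        same_type = by_type[ex["qtype"]]
        if rng.random() < neg_rate and len(same_type) > 1:
            other = rng.choice([e for e in same_type if e is not ex])
            pairs.append({"image": other["image"], "question": ex["question"], "label": 0})
        else:
            pairs.append({"image": ex["image"], "question": ex["question"], "label": 1})
    return pairs

examples = [
    {"image": "img1", "question": "what color is the banana", "qtype": "what color"},
    {"image": "img2", "question": "what color is the sky", "qtype": "what color"},
]
print(sample_pairs(examples, neg_rate=1.0))
```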

14.
Answer selection is the most complex phase of a question answering (QA) system. To solve this task, typical approaches use unsupervised methods such as computing the similarity between query and answer, optionally exploiting advanced syntactic, semantic or logic representations.

15.
Question answering systems help users satisfy their information needs more precisely by providing focused responses to their questions. Among the various systems developed for this purpose, community-based question answering has recently received researchers' attention due to the large amount of user-generated questions and answers on social question-and-answer platforms. Reusing such data sources requires an accurate information retrieval component enhanced by a question classifier. Question classification gives the system information about question categories, allowing it to focus on questions and answers from the categories relevant to the input question. In this paper, we propose a new method based on unsupervised Latent Dirichlet Allocation for classifying questions in community-based question answering. Our method first uses unsupervised topic modeling to extract topics from a large amount of unlabeled data. The learned topics are then used in the training phase to find their association with the category labels available in the training data. The category mixture of topics is finally used to predict the label of unseen data.
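A compact sketch of this topic-based classification pipeline, using scikit-learn's LDA implementation with placeholder data and hyperparameters, could look like this:

```python
# LDA learns topics from unlabeled questions; labelled questions then link
# topics to categories; unseen questions are labelled via their topic mixture.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

unlabeled = ["how to cook rice", "best pasta recipe",
             "fix a flat tire", "car engine noise"]
labeled   = [("easy dinner ideas", "food"), ("brake pads squeaking", "cars")]

vec = CountVectorizer().fit(unlabeled + [q for q, _ in labeled])
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(vec.transform(unlabeled))

# Associate each topic with a category label using the labelled questions.
topic_label = {}
for q, label in labeled:
    topic = int(np.argmax(lda.transform(vec.transform([q]))[0]))
    topic_label[topic] = label

def classify(question):
    mixture = lda.transform(vec.transform([question]))[0]
    return topic_label.get(int(np.argmax(mixture)), "unknown")

print(classify("how long to boil noodles"))
```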

16.
Generally, QA systems suffer from a structural difference: a question is composed of unstructured data, while its answer is made up of structured data in a Knowledge Graph (KG). To bridge this gap, most approaches use lexicons to cover data that are represented differently. However, existing lexicons merely deal with representations of entity and relation mentions rather than the comprehensive meaning of the question. To resolve this, we design a novel predicate-constraints lexicon that restricts the subject and object types for each predicate, facilitating the simultaneous validation of a subject, predicate, and object. In this paper, we propose Predicate Constraints based Question Answering (PCQA). Our method prunes inappropriate entity/relation matchings to reduce the search space, leading to improved accuracy. Unlike existing QA systems, we do not use any templates but instead generate query graphs to cover diverse types of questions. In query graph generation, we focus more on matching relations than on linking entities, which is well suited to the use of predicate constraints. Our experimental results prove the validity of our approach and demonstrate reasonable performance compared with other methods on the WebQuestions and Free917 benchmarks.
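In its simplest form, the predicate-constraints idea reduces to a lookup table of allowed subject/object types per predicate; the types and predicates below are invented for illustration, not the paper's actual lexicon.

```python
# Each predicate restricts the allowed subject and object types, so a
# candidate (subject, predicate, object) triple is validated as a whole;
# failing triples are pruned from the query-graph search space.
PREDICATE_CONSTRAINTS = {
    "bornIn":    ("Person", "Place"),
    "capitalOf": ("City",   "Country"),
    "authorOf":  ("Person", "Book"),
}

def valid_triple(subj_type, predicate, obj_type):
    constraint = PREDICATE_CONSTRAINTS.get(predicate)
    if constraint is None:
        return False  # unknown predicates are pruned
    return (subj_type, obj_type) == constraint

print(valid_triple("Person", "bornIn", "Place"))  # True  -> keep this matching
print(valid_triple("City", "bornIn", "Place"))    # False -> prune
```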

17.
In this paper, we propose a generative model, the Topic-based User Interest (TUI) model, to capture user interest in User-Interactive Question Answering (UIQA) systems. Specifically, our method models user interest in UIQA systems with a latent topic method, extracting interests for users by mining the questions they asked, the categories they participated in, and the relevant answer providers. We apply the TUI model to question recommendation, which automatically recommends to a given user appropriate questions he or she might be interested in. A data collection from Yahoo! Answers is used to evaluate the performance of the proposed model in question recommendation, and the experimental results show the effectiveness of our proposed model.

18.
Humans are able to reason from multiple sources to arrive at the correct answer. In the context of Multiple Choice Question Answering (MCQA), knowledge graphs can provide subgraphs based on different combinations of questions and answers, mimicking the way humans find answers. However, current research mainly focuses on independent reasoning on a single graph for each question–answer pair, lacking the ability for joint reasoning among all answer candidates. In this paper, we propose a novel method KMSQA, which leverages multiple subgraphs from the large knowledge graph ConceptNet to model the comprehensive reasoning process. We further encode the knowledge graphs with shared Graph Neural Networks (GNNs) and perform joint reasoning across multiple subgraphs. We evaluate our model on two common datasets: CommonsenseQA (CSQA) and OpenBookQA (OBQA). Our method achieves an exact match score of 74.53% on CSQA and 71.80% on OBQA, outperforming all eight baselines.
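A heavily simplified numpy sketch of shared-weight reasoning over multiple candidate subgraphs is given below; the single mean-aggregation layer, the dimensions, and the softmax scoring are illustrative simplifications of the paper's GNN architecture, not its implementation.

```python
# One weight matrix is reused for every answer candidate's subgraph, and the
# pooled subgraph representations are scored jointly with a softmax, so the
# candidates compete in a single joint reasoning step.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))    # shared GNN weights across all subgraphs
w_out = rng.normal(size=8)     # shared scoring vector

def encode_subgraph(node_feats, adj):
    """One round of mean-aggregation message passing, then mean pooling."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    h = np.tanh((adj @ node_feats) / deg @ W)
    return h.mean(axis=0)

def joint_score(subgraphs):
    """subgraphs: one (node_feats, adj) pair per answer candidate."""
    logits = np.array([w_out @ encode_subgraph(x, a) for x, a in subgraphs])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()     # softmax over candidates = joint reasoning

g1 = (rng.normal(size=(3, 8)), np.ones((3, 3)))
g2 = (rng.normal(size=(4, 8)), np.ones((4, 4)))
print(joint_score([g1, g2]))
```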

19.
Optimal answerer ranking for new questions in community question answering
Community question answering (CQA) services that enable users to ask and answer questions have become popular on the internet. However, many new questions are not effectively resolved by appropriate answerers. To address this question routing task, we treat it as a ranking problem and rank the potential answerers by the probability that they are able to solve the given new question. We utilize a tensor model and a topic model simultaneously to extract latent semantic relations among asker, question, and answerer. We then propose a learning procedure based on these models to obtain an optimal ranking of answerers for new questions by optimizing the multi-class AUC (Area Under the ROC Curve). Experimental results on two real-world CQA datasets show that the proposed method is able to predict appropriate answerers for new questions and outperforms other state-of-the-art approaches.
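Viewed simply as a ranking problem, the routing step can be sketched as follows; plain random features and a logistic model stand in for the paper's tensor/topic-model features and multi-class AUC optimization.

```python
# Answerers are scored by the predicted probability of resolving a new
# question and evaluated with AUC; the feature matrix here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))   # latent (asker, question, answerer) features
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X[:150], y[:150])
scores = model.predict_proba(X[150:])[:, 1]
print("AUC:", roc_auc_score(y[150:], scores))

# For a new question, candidate answerers are ranked by their scores:
ranking = np.argsort(-scores)[:5]   # indices of the top-5 answerer candidates
print("top answerers:", ranking)
```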

20.
This paper presents a roadmap of current promising research tracks in question answering, with a focus on knowledge acquisition and reasoning. We show that many techniques developed within text mining and natural language processing are ready to be integrated into question answering search systems. Their integration opens new avenues of research for factual answer finding and for advanced question answering. Advanced question answering refers to situations where an understanding of the meaning of the question and of the information source, together with techniques for answer fusion and generation, is needed.

