Similar Documents (20 results)
1.
Question answering (QA) aims at finding exact answers to a user’s question from a large collection of documents. Most QA systems combine information retrieval with extraction techniques to identify a set of likely candidates and then utilize some ranking strategy to generate the final answers. This ranking process can be challenging, as it entails identifying the relevant answers amongst many irrelevant ones. It is more challenging still in multi-strategy QA, in which multiple answering agents are used to extract answer candidates. As answer candidates come from different agents with different score distributions, how to merge answer candidates plays an important role in answer ranking. In this paper, we propose a unified probabilistic framework which combines multiple sources of evidence to address the challenges of answer ranking and answer merging. The hypotheses of the paper are that: (1) the framework effectively combines multiple sources of evidence to identify answer relevance and answer correlation in answer ranking, (2) the framework supports answer merging on answer candidates returned by multiple extraction techniques, (3) the framework can support list questions as well as factoid questions, (4) the framework can be easily applied to a different QA system, and (5) the framework significantly improves the performance of a QA system. An extensive set of experiments was conducted to support our hypotheses and demonstrate the effectiveness of the framework. All of the work substantially extends the preliminary research in Ko et al. (2007a), A probabilistic framework for answer selection in question answering, In: Proceedings of NAACL/HLT.
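A minimal sketch of the answer-merging idea this abstract describes, not the authors' framework: per-agent scores with different distributions are normalized, and identical answer strings returned by different extraction agents are merged with a noisy-OR style combination. The function names, min-max normalization, and combination rule are illustrative assumptions.

```python
from collections import defaultdict

def normalize(scores):
    """Min-max normalize one agent's raw scores so agents with
    different score distributions become comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {ans: (s - lo) / span for ans, s in scores.items()}

def merge_candidates(agent_outputs):
    """agent_outputs: list of {answer_string: raw_score} dicts, one per agent.
    Identical answers are merged; scores are combined noisy-OR style."""
    combined_miss = defaultdict(lambda: 1.0)
    for output in agent_outputs:
        for ans, p in normalize(output).items():
            # treat the normalized score as an (assumed) probability of correctness
            combined_miss[ans] *= (1.0 - p)
    ranked = {ans: 1.0 - miss for ans, miss in combined_miss.items()}
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    agents = [{"Paris": 12.0, "Lyon": 3.0}, {"Paris": 0.7, "Marseille": 0.4}]
    print(merge_candidates(agents))
```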

2.
This paper describes how questions can be characterized for question answering (QA) along different facets and focuses on questions that cannot be answered directly but can be divided into simpler ones, so that they can be answered using existing QA capabilities. Since individual answers are composed to generate the final answer, we call this process compositional QA. The goal of the proposed QA method is to answer a composite question by dividing it into atomic ones, instead of developing an entirely new method tailored for the new question type. A question is analyzed automatically to determine its class, and its sub-questions are sent to the relevant QA modules. Answers returned from the individual QA modules are composed based on the predetermined plan corresponding to the question type. The experimental results based on 615 questions show that the compositional QA approach outperforms the simple routing method by about 17%. Considering the 115 composite questions only, the F-score almost tripled compared with the baseline.
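A toy sketch of the decompose-answer-compose flow described above, not the paper's implementation: a composite comparison question is split into atomic factoid sub-questions, each is routed to a stand-in factoid module, and the answers are composed according to a plan for that question type. The question format, knowledge base, and composition plan are invented for illustration.

```python
def classify(question):
    """Toy question classifier: a comparison question is treated as composite."""
    return "comparison" if " or " in question else "factoid"

def decompose(question):
    """Split a comparison question into atomic factoid sub-questions."""
    head, alternatives = question.rstrip("?").split(": ", 1)
    return [f"{head}: {alt}?" for alt in alternatives.split(" or ")]

def factoid_qa(question):
    """Stand-in for an existing factoid QA module."""
    toy_kb = {"What is the population of: Oslo?": 700_000,
              "What is the population of: Bergen?": 290_000}
    return toy_kb.get(question, 0)

def answer(question):
    if classify(question) == "factoid":
        return factoid_qa(question)
    # composite question: answer each sub-question, then compose per the plan
    sub_answers = {q: factoid_qa(q) for q in decompose(question)}
    best = max(sub_answers, key=sub_answers.get)   # plan for comparisons: larger value wins
    return best.split(": ", 1)[1].rstrip("?")      # report the winning alternative

print(answer("What is the population of: Oslo or Bergen?"))  # -> Oslo
```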

3.
With the advances in natural language processing (NLP) techniques and the need to deliver more fine-grained information or answers than a set of documents, various QA techniques have been developed for different question and answer types. A comprehensive QA system must be able to incorporate individual QA techniques as they are developed and integrate their functionality to maximize the system’s overall capability in handling increasingly diverse types of questions. To this end, a new QA method was developed to learn strategies for determining module invocation sequences and boosting answer weights for different types of questions. In this article, we examine the roles and effects of the answer verification and weight boosting method, which forms the core of the automatically generated strategy-driven QA framework, in comparison with a strategy-less, straightforward answer-merging approach and a strategy-driven approach that uses manually constructed strategies.

4.
5.
[Purpose/Significance] This study evaluates the natural-language question answering capability of four Chinese and international search engines (Google, Bing, Baidu and Sogou), in order to reveal the trend of search engines evolving into systems that combine search with automatic question answering, and compares how well different engines answer different types of questions in natural language. [Method/Process] Three types of questions (person, time and location) were sampled from the question answering evaluation tracks of the Text Retrieval Conference (TREC) and the Conference on Natural Language Processing and Chinese Computing (NLPCC) and submitted to the search engines. Responses were scored manually according to whether the engine returned an exact answer or a featured snippet containing the correct answer, and the results were compared using one-way ANOVA and multiple-comparison tests. [Results/Conclusions] Mainstream Chinese and international search engines already possess a degree of natural-language question answering capability, but there is still considerable room for improvement. Google performed best overall, but its ability to answer person questions was weaker than Sogou's. All engines performed better on time questions than on person and location questions.
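A rough sketch of the kind of statistical comparison the study describes, not its actual data or code: manual 0/1 scores per engine are compared with a one-way ANOVA, followed by Bonferroni-corrected pairwise t-tests as a simple stand-in for the multiple-comparison tests. The scores below are made up.

```python
from itertools import combinations
from scipy import stats

# Hypothetical manual scores (1 = exact answer / featured snippet, 0 = miss)
# for the same set of "person" questions on each engine.
scores = {
    "Google": [1, 1, 0, 1, 1, 0, 1, 1],
    "Bing":   [1, 0, 0, 1, 0, 0, 1, 0],
    "Baidu":  [0, 1, 0, 1, 0, 1, 0, 0],
    "Sogou":  [1, 1, 1, 1, 0, 1, 1, 1],
}

# One-way ANOVA: do mean scores differ across the four engines?
f_stat, p_value = stats.f_oneway(*scores.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.3f}")

# Pairwise comparisons (Bonferroni-corrected t-tests as a simple stand-in
# for the multiple-comparison tests used in the study).
pairs = list(combinations(scores, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(scores[a], scores[b])
    print(f"{a} vs {b}: corrected p={min(p * len(pairs), 1.0):.3f}")
```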

6.
Question Answering (QA) systems are developed to answer human questions. In this paper, we have proposed a framework for answering definitional and factoid questions, enriched by machine learning and evolutionary methods and integrated into a web-based QA system. Our main purpose is to build new features by combining state-of-the-art features with arithmetic operators. To accomplish this goal, we have presented a Genetic Programming (GP)-based approach. The role of GP is to find the most promising formulas, made up of a set of features and operators, which can accurately rank paragraphs, sentences, and words. We have also developed a QA system in order to test the new features. The input to our system is the text of documents retrieved by a search engine. To answer definitional questions, our system performs paragraph ranking and returns the most related paragraph. Moreover, in order to answer factoid questions, the system evaluates sentences of the filtered paragraphs ranked by the previous module of our framework. After this phase, the system extracts one or more words from the ranked sentences based on a set of hand-crafted patterns and ranks them to find the final answer. We have used Text Retrieval Conference (TREC) QA track questions, web data, and the AQUAINT and AQUAINT-2 datasets for training and testing our system. Results show that the learned features produce better rankings than other evaluation formulas.
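A highly simplified sketch of the idea of combining features with arithmetic operators and keeping the formula that ranks best: random search stands in for the paper's genetic programming, and the feature names, labelled sample, and pairwise-accuracy fitness are assumptions for illustration.

```python
import random
import operator

OPS = [operator.add, operator.sub, operator.mul]
FEATURES = ["tf_idf", "query_overlap", "ner_match"]   # assumed paragraph features

def random_formula():
    """Build a random two-feature arithmetic combination
    (a tiny stand-in for a GP-evolved formula tree)."""
    f1, f2 = random.sample(FEATURES, 2)
    op = random.choice(OPS)
    return (lambda feats: op(feats[f1], feats[f2])), f"{f1} {op.__name__} {f2}"

def ranking_quality(score_fn, labelled):
    """Fraction of (relevant, irrelevant) pairs ordered correctly."""
    rel = [score_fn(f) for f, y in labelled if y]
    irr = [score_fn(f) for f, y in labelled if not y]
    pairs = [(r, i) for r in rel for i in irr]
    return sum(r > i for r, i in pairs) / len(pairs)

labelled = [({"tf_idf": 0.8, "query_overlap": 0.7, "ner_match": 1.0}, True),
            ({"tf_idf": 0.4, "query_overlap": 0.9, "ner_match": 0.0}, True),
            ({"tf_idf": 0.5, "query_overlap": 0.2, "ner_match": 0.0}, False),
            ({"tf_idf": 0.1, "query_overlap": 0.3, "ner_match": 0.0}, False)]

candidates = [random_formula() for _ in range(200)]
best_fn, best_desc = max(candidates, key=lambda c: ranking_quality(c[0], labelled))
print(best_desc, ranking_quality(best_fn, labelled))
```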

7.
Recently, question series have become one focus of research in question answering. These series are composed of individual factoid, list, and “other” questions organized around a central topic, and represent abstractions of user–system dialogs. Existing evaluation methodologies have yet to catch up with this richer task model, as they fail to take into account contextual dependencies and different user behaviors. This paper presents a novel simulation-based methodology for evaluating answers to question series that addresses some of these shortcomings. Using this methodology, we examine two different behavior models: a “QA-styled” user and an “IR-styled” user. Results suggest that an off-the-shelf document retrieval system is competitive with state-of-the-art QA systems in this task. Advantages and limitations of evaluations based on user simulations are also discussed.

8.
Question answering websites are becoming an ever more popular knowledge sharing platform. On such websites, people may ask any type of question and then wait for someone else to answer it. However, in this manner, askers may not obtain correct answers from appropriate experts. Recently, various approaches have been proposed to automatically find experts in question answering websites. In this paper, we propose a novel hybrid approach to effectively find experts for the category of the target question in question answering websites. Our approach considers user subject relevance, user reputation, and user authority in a category when finding experts. A user’s subject relevance denotes the relevance of a user’s domain knowledge to the target question. A user’s reputation is derived from the user’s historical question-answering records, while user authority is derived from link analysis. Moreover, our proposed approach has been extended to a question-dependent approach that considers the relevance of historical questions to the target question when deriving user domain knowledge, reputation and authority. We used a dataset obtained from Yahoo! Answer Taiwan to evaluate our approach. Our experimental results show that our proposed methods outperform other conventional methods.
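An illustrative sketch combining the three signals this abstract names: subject relevance (TF-IDF similarity between the target question and each user's answer history), reputation (best-answer ratio), and authority (PageRank over an asker-to-answerer graph). The weights, data, and use of PageRank as the link-analysis step are assumptions, not the paper's exact method.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy answer histories of three users in the target category.
history = {"u1": "python list sort lambda key", "u2": "java spring controller bean",
           "u3": "python pandas dataframe groupby"}
best_ratio = {"u1": 0.6, "u2": 0.3, "u3": 0.5}                     # reputation signal
asker_answerer_edges = [("a1", "u1"), ("a2", "u1"), ("a1", "u3")]  # link-analysis input

target = "how do I sort a python list of dicts by key"

# Subject relevance: similarity between the question and each user's history.
vec = TfidfVectorizer().fit(list(history.values()) + [target])
relevance = dict(zip(history, cosine_similarity(vec.transform([target]),
                                                vec.transform(list(history.values())))[0]))

# Authority: PageRank over the asker -> answerer graph.
authority = nx.pagerank(nx.DiGraph(asker_answerer_edges))

def expert_score(user, w=(0.5, 0.3, 0.2)):      # weights are arbitrary here
    return (w[0] * relevance[user] + w[1] * best_ratio[user]
            + w[2] * authority.get(user, 0.0))

print(sorted(history, key=expert_score, reverse=True))
```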

9.
Similarity computation is an important topic in automatic question answering. To ensure that the answers in the candidate set can be ranked reasonably, and to address the inability of traditional automatic QA systems to evaluate similarity comprehensively and efficiently, this paper proposes using a comprehensive index method to combine keyword similarity, semantic similarity and other measures into an overall similarity score. In addition, because some candidate answers contain too much redundant information, which hinders answer extraction, a decayed similarity parameter is designed to reduce the impact of redundant sentence information on answer extraction. Experimental results show that the comprehensive-index similarity algorithm can effectively improve the accuracy of question answering.
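A minimal sketch of the idea described above, not the paper's algorithm: a weighted combination of keyword similarity and a semantic measure, with a decay factor that penalizes candidates carrying many surplus (redundant) words. The weights, the decay formula, and the use of a string-matching ratio as the semantic stand-in are all assumptions.

```python
from difflib import SequenceMatcher

def keyword_similarity(question, candidate):
    """Jaccard overlap of word sets as a simple keyword similarity."""
    q, c = set(question.split()), set(candidate.split())
    return len(q & c) / len(q | c)

def semantic_similarity(question, candidate):
    """Cheap stand-in for a real semantic measure (e.g., embedding cosine)."""
    return SequenceMatcher(None, question, candidate).ratio()

def comprehensive_similarity(question, candidate, w_kw=0.4, w_sem=0.6,
                             ideal_len=15, decay=0.03):
    """Weighted "comprehensive index" of the individual similarities,
    attenuated when the candidate carries many surplus words
    (a stand-in for the decayed-similarity parameter described above)."""
    base = (w_kw * keyword_similarity(question, candidate)
            + w_sem * semantic_similarity(question, candidate))
    surplus = max(len(candidate.split()) - ideal_len, 0)
    return base * (1.0 - decay) ** surplus

q = "when was the eiffel tower built"
print(comprehensive_similarity(q, "the eiffel tower was built in 1889"))
```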

10.
Question answering systems assist users in satisfying their information needs more precisely by providing focused responses to their questions. Among the various systems developed for this purpose, community-based question answering has recently received researchers’ attention due to the large amount of user-generated questions and answers on social question-and-answer platforms. Reusing such data sources requires an accurate information retrieval component enhanced by a question classifier. The question classifier provides category information that allows the system to focus on questions and answers from categories relevant to the input question. In this paper, we propose a new method based on unsupervised Latent Dirichlet Allocation for classifying questions in community-based question answering. Our method first uses unsupervised topic modeling to extract topics from a large amount of unlabeled data. The learned topics are then used in the training phase to find their association with the category labels available in the training data. The category mixture of topics is finally used to predict the label of unseen data.
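A small sketch of the pipeline this abstract outlines, using scikit-learn rather than whatever the authors used: LDA is fit on unlabeled questions, the topic mixtures of a labeled set are used to learn the topic-to-category association, and unseen questions are classified from their topic mixture. The data, component counts, and the logistic-regression association step are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

unlabeled = ["how do I treat a sore throat", "best medicine for headache",
             "how to file income tax online", "tax deduction for home office",
             "is paracetamol safe for children", "when is the tax deadline"]
labeled   = ["cough will not go away", "claim tax refund for 2023"]
labels    = ["health", "finance"]

# Stage 1: unsupervised topic model on plentiful unlabeled questions.
vec = CountVectorizer()
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(vec.fit_transform(unlabeled))

# Stage 2: learn how topic mixtures associate with category labels.
clf = LogisticRegression().fit(lda.transform(vec.transform(labeled)), labels)

# Stage 3: predict the category of an unseen question from its topic mixture.
print(clf.predict(lda.transform(vec.transform(["medicine for a sore throat"]))))
```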

11.
Wang Rihua. 情报科学 (Information Science), 2021, 39(10): 76-87
[Purpose/Significance] This work addresses the high cost of dataset construction when building automatic question answering systems, as well as the limitation that automatic question answering typically considers only the relevance of the question or the answer itself. [Method/Process] A dataset construction method is proposed that fuses an annotated question-answer corpus with community question answering data; a multi-layer heterogeneous network model over question keywords, questions, answers and answer clusters is built, and an automatic question answering algorithm based on this model is given. Library corpora were collected and processed as experimental data, and the BERT-Cos, AINN and BiMPM models were used as baselines for experiments and analysis. [Results/Conclusions] The experiments report each model's performance on the library automatic question answering task; the proposed model outperforms the other models on all evaluation metrics, reaching an accuracy of 87.85%. [Innovation/Limitations] The proposed multi-source dataset construction method and automatic question answering model perform better than existing methods on the question answering task, and recommendations on the word length of user questions are given based on the analysis of model performance.

12.
Answer selection is the most complex phase of a question answering (QA) system. To solve this task, typical approaches use unsupervised methods such as computing the similarity between query and answer, optionally exploiting advanced syntactic, semantic or logic representations.
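As a rough illustration of the unsupervised query-answer similarity baseline mentioned here (not the paper's method), candidate answers can be ranked by TF-IDF cosine similarity to the question; the data below are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

question = "who wrote the origin of species"
candidates = ["Charles Darwin wrote On the Origin of Species in 1859.",
              "The origin of the species name is Latin.",
              "Darwin travelled on the Beagle."]

tfidf = TfidfVectorizer().fit(candidates + [question])
sims = cosine_similarity(tfidf.transform([question]), tfidf.transform(candidates))[0]
best = max(range(len(candidates)), key=sims.__getitem__)
print(candidates[best], sims[best])
```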

13.
Machine reading comprehension (MRC) is a challenging task in the field of artificial intelligence. Most existing MRC works contain a semantic matching module, either explicitly or intrinsically, to determine whether a piece of context answers a question. However, there is scant work which systematically evaluates different paradigms using semantic matching in MRC. In this paper, we conduct a systematic empirical study on semantic matching. We formulate a two-stage framework which consists of a semantic matching model and a reading model, based on pre-trained language models. We compare and analyze the effectiveness and efficiency of using semantic matching modules with different setups on four types of MRC datasets. We verify that using semantic matching before a reading model improves both the effectiveness and efficiency of MRC. Compared with answering questions by extracting information from concise context, we observe that semantic matching yields more improvements for answering questions with noisy and adversarial context. Matching coarse-grained context to questions, e.g., paragraphs, is more effective than matching fine-grained context, e.g., sentences and spans. We also find that semantic matching is helpful for answering who/where/when/what/how/which questions, whereas it decreases the MRC performance on why questions. This may imply that semantic matching helps to answer a question whose necessary information can be retrieved from a single sentence. The above observations demonstrate the advantages and disadvantages of using semantic matching in different scenarios.
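A toy sketch of the two-stage pattern the abstract studies, not the authors' models: a coarse semantic-matching step filters paragraphs before a reading step extracts the answer. Word overlap stands in for both the pre-trained matcher and the reader, and the documents are invented.

```python
def match_score(question, text):
    """Toy semantic-matching model: word-overlap score
    (stand-in for a fine-tuned pre-trained matcher)."""
    q, t = set(question.lower().split()), set(text.lower().split())
    return len(q & t) / len(q)

def read(question, paragraph):
    """Toy reading model: return the sentence sharing the most question words
    (stand-in for an extractive reader)."""
    sentences = [s.strip() for s in paragraph.split(".") if s.strip()]
    return max(sentences, key=lambda s: match_score(question, s))

def two_stage_mrc(question, paragraphs, top_k=1):
    """Stage 1: coarse-grained matching filters paragraphs.
       Stage 2: the reader answers only within the top-ranked paragraph(s)."""
    ranked = sorted(paragraphs, key=lambda p: match_score(question, p), reverse=True)
    return [read(question, p) for p in ranked[:top_k]]

docs = ["The Eiffel Tower was completed in 1889. It is in Paris.",
        "Bananas are rich in potassium. They grow in tropical regions."]
print(two_stage_mrc("When was the Eiffel Tower completed?", docs))
```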

14.
15.
Generally, QA systems suffer from the structural difference whereby a question is composed of unstructured data, while its answer is made up of structured data in a Knowledge Graph (KG). To bridge this gap, most approaches use lexicons to cover data that are represented differently. However, existing lexicons merely deal with representations for entity and relation mentions rather than consulting the comprehensive meaning of the question. To resolve this, we design a novel predicate-constraints lexicon which restricts the subject and object types for a predicate. It facilitates comprehensive validation of a subject, predicate and object simultaneously. In this paper, we propose Predicate Constraints based Question Answering (PCQA). Our method prunes inappropriate entity/relation matchings to reduce the search space, thus leading to an improvement in accuracy. Unlike existing QA systems, we do not use any templates but instead generate query graphs to cover diverse types of questions. In query graph generation, we put more focus on matching relations than on linking entities, which is well suited to the use of predicate constraints. Our experimental results demonstrate the validity of our approach and a reasonable performance compared to other methods targeting the WebQuestions and Free917 benchmarks.
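A minimal sketch of the predicate-constraints idea, not the PCQA system itself: a lexicon records the allowed subject and object types for each predicate, and candidate (subject, predicate, object) matchings that violate it are pruned before query-graph construction. The lexicon entries and entity types are invented.

```python
# Predicate-constraints lexicon: allowed (subject type, object type) per predicate.
PREDICATE_CONSTRAINTS = {
    "birth_place": ("Person", "Location"),
    "author_of":   ("Person", "Book"),
    "capital_of":  ("City",   "Country"),
}

ENTITY_TYPES = {"Barack Obama": "Person", "Honolulu": "Location",
                "Moby-Dick": "Book", "Oslo": "City", "Norway": "Country"}

def valid_triple(subject, predicate, obj):
    """Keep a candidate matching only if both argument types satisfy
    the predicate's constraints."""
    if predicate not in PREDICATE_CONSTRAINTS:
        return False
    s_type, o_type = PREDICATE_CONSTRAINTS[predicate]
    return ENTITY_TYPES.get(subject) == s_type and ENTITY_TYPES.get(obj) == o_type

candidates = [("Barack Obama", "birth_place", "Honolulu"),
              ("Barack Obama", "birth_place", "Moby-Dick"),
              ("Oslo", "capital_of", "Norway")]
print([t for t in candidates if valid_triple(*t)])   # prunes the invalid matching
```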

16.
Existing approaches in online health question answering (HQA) communities to identify the quality of answers either address it subjectively by human assessment or rely mainly on textual features. This process may be time-consuming and lose the semantic information of answers. We present an automatic approach for predicting answer quality that combines sentence-level semantics with textual and non-textual features in the context of online healthcare. First, we extend the knowledge adoption model (KAM) theory to obtain six dimensions of quality measures for textual and non-textual features. Then we apply the Bidirectional Encoder Representations from Transformers (BERT) model to extract semantic features. Next, the multi-dimensional features are processed for dimensionality reduction using linear discriminant analysis (LDA). Finally, we feed the preprocessed features into the proposed BK-XGBoost method to automatically predict answer quality. The proposed method is validated on a real-world dataset of 48,121 question-answer pairs crawled from the most popular online HQA communities in China. The experimental results indicate that our method performs favorably against the baseline models on various evaluation metrics, with improvements in AUC of up to 2.9% and 5.7% over the BERT and XGBoost models, respectively.
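A compressed sketch of the feature-reduction-plus-boosting pattern described above, not the BK-XGBoost method itself: placeholder vectors stand in for BERT embeddings concatenated with textual/non-textual features, linear discriminant analysis reduces the dimensionality, and an XGBoost classifier predicts the quality label. All data and hyperparameters are synthetic assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Placeholder matrix standing in for BERT sentence embeddings of answers
# concatenated with textual/non-textual features (answer length, answerer
# reputation, ...); in the real pipeline these would come from a BERT encoder.
n_answers, bert_dim, extra_dim = 200, 32, 6
X = rng.normal(size=(n_answers, bert_dim + extra_dim))
y = rng.integers(0, 2, size=n_answers)          # 1 = high-quality answer (synthetic)

# Dimensionality reduction with linear discriminant analysis, then a
# gradient-boosted classifier (a simplified take on the BERT + XGBoost idea).
lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
clf = XGBClassifier(n_estimators=50).fit(lda.transform(X), y)

print("predicted quality labels:", clf.predict(lda.transform(X[:3])))
```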

17.
This paper presents a roadmap of current promising research tracks in question answering, with a focus on knowledge acquisition and reasoning. We show that many current techniques developed in the fields of text mining and natural language processing are ready to be integrated into question answering search systems. Their integration opens new avenues of research for factual answer finding and for advanced question answering. Advanced question answering refers to a situation where an understanding of the meaning of the question and the information source is needed, together with techniques for answer fusion and generation.

18.
19.
With the noted popularity of social networking sites, people increasingly rely on these social networks to address their information needs. Although social question and answering is potentially an important venue for seeking information online, it unfortunately suffers from a low response rate, with the majority of questions receiving no response. To understand why the response rate of social question and answering is low, and hopefully to increase it in the future, this research analyzes extrinsic factors that may influence the response probability of questions posted on Sina Weibo. We propose 17 influential factors from two different perspectives: the content of the question and the characteristics of the questioner. We also train a prediction model to forecast a question’s likelihood of receiving a response based on the proposed features. We test our predictive model on more than 60,000 real-world questions posted on Weibo, which generated more than 600,000 responses. Findings show that a Weibo question’s answerability is primarily contingent on the questioner rather than the question. Our findings indicate that using appreciation emojis can increase a question’s response probability, whereas the use of hashtags negatively influences the chances of receiving answers. Our contribution lies in providing insights for the design and development of future social question and answering tools, as well as for enhancing social network users’ collaboration in supporting social information seeking activities.
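A toy sketch of predicting response probability from extrinsic features such as those the study discusses (hashtag use, appreciation emoji, question length, questioner prominence); the specific features, training rows, and logistic-regression model are illustrative assumptions rather than the paper's 17-factor model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(question_text, follower_count):
    """A few extrinsic features of the kind the study describes."""
    return [
        int("#" in question_text),                              # uses hashtags
        int("🙏" in question_text or "😊" in question_text),     # appreciation emoji
        len(question_text),                                     # question length
        np.log1p(follower_count),                               # questioner prominence
    ]

# Hypothetical training rows: (text, follower count, got a response?)
rows = [("Anyone know a good dentist in Beijing? 🙏", 350, 1),
        ("Why is my code broken #python #help", 40, 0),
        ("Recommend a laptop under 5000 yuan? 😊", 1200, 1),
        ("Thoughts? #random", 15, 0)]
X = np.array([features(t, f) for t, f, _ in rows])
y = np.array([label for *_, label in rows])

model = LogisticRegression().fit(X, y)
print(model.predict_proba([features("Best noodle place near Wudaokou? 🙏", 500)])[0, 1])
```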

20.
Optimal answerer ranking for new questions in community question answering
Community question answering (CQA) services that enable users to ask and answer questions have become popular on the internet. However, many new questions cannot be effectively resolved by appropriate answerers. To address this question routing task, in this paper we treat it as a ranking problem and rank the potential answerers by the probability that they are able to solve the given new question. We utilize a tensor model and a topic model simultaneously to extract latent semantic relations among asker, question and answerer. Then, we propose a learning procedure based on the above models to obtain an optimal ranking of answerers for new questions by optimizing the multi-class AUC (Area Under the ROC Curve). Experimental results on two real-world CQA datasets show that the proposed method is able to predict appropriate answerers for new questions and outperforms other state-of-the-art approaches.
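An illustrative sketch of ranking candidate answerers for a new question by predicted probability and scoring with multi-class AUC, the criterion named above; synthetic features stand in for the paper's tensor- and topic-model representations, and a logistic regression stands in for its learning procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for latent question/asker features (the paper derives
# these from tensor and topic models over asker-question-answerer data).
n_questions, n_features, n_answerers = 300, 8, 3
X = rng.normal(size=(n_questions, n_features))
true_w = rng.normal(size=(n_features, n_answerers))
y = (X @ true_w).argmax(axis=1)          # which answerer actually resolved each question

model = LogisticRegression(max_iter=1000).fit(X[:200], y[:200])
proba = model.predict_proba(X[200:])     # P(answerer | new question)

# Rank answerers for one new question by probability of being able to answer it.
print("ranking for first held-out question:", np.argsort(-proba[0]))

# Multi-class AUC (one-vs-rest), analogous to the criterion the paper optimizes.
print("multi-class AUC:", roc_auc_score(y[200:], proba, multi_class="ovr"))
```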
