Evaluating epistemic uncertainty under incomplete assessments
Authors: Mark Baillie, Leif Azzopardi, Ian Ruthven
Affiliation:1. Department of Computer and Information Sciences, University of Strathclyde, Glasgow, Scotland, UK;2. Department of Computing Science, University of Glasgow, Glasgow, Scotland, UK
Abstract: This study proposes an extended methodology for laboratory-based Information Retrieval evaluation under incomplete relevance assessments. The methodology aims to identify potential uncertainty during system comparison that may result from incompleteness. Its adoption is advantageous because detecting epistemic uncertainty – the amount of knowledge (or ignorance) we have about the estimate of a system’s performance – during the evaluation process can guide researchers when evaluating new systems over existing and future test collections. Across a series of experiments we demonstrate how this methodology leads to a finer-grained analysis of systems. In particular, we show experimentally how the current practice in Information Retrieval evaluation of using a measurement depth larger than the pooling depth increases uncertainty during system comparison.
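The interplay of pooling depth and measurement depth described in the abstract can be illustrated with a small sketch (not the paper's actual methodology): when relevance judgments only cover the pool, documents ranked below the pooling depth are unjudged, so a metric such as precision at k can only be bounded. Treating unjudged documents as non-relevant gives a lower bound, treating them as relevant gives an upper bound, and the gap between the two is one simple view of the epistemic uncertainty; the function name and data below are illustrative.

```python
def precision_bounds(ranking, judged, k):
    """Bound P@k under incomplete judgments.

    ranking: list of doc ids in ranked order.
    judged:  dict mapping doc id -> True (relevant) / False (non-relevant);
             documents absent from the dict are unjudged.
    Returns (lower_bound, upper_bound) on precision at depth k.
    """
    top = ranking[:k]
    relevant = sum(1 for d in top if judged.get(d) is True)
    unjudged = sum(1 for d in top if d not in judged)
    # Lower bound: unjudged counted as non-relevant.
    # Upper bound: unjudged counted as relevant.
    return relevant / k, (relevant + unjudged) / k

# Hypothetical example: the pool was judged only to depth 3,
# but we measure at depth 5 (measurement depth > pooling depth).
ranking = ["d1", "d2", "d3", "d4", "d5"]
judged = {"d1": True, "d2": False, "d3": True}

print(precision_bounds(ranking, judged, 3))  # no unjudged docs: bounds coincide
print(precision_bounds(ranking, judged, 5))  # bounds diverge: uncertainty appears
```

At k = 3 the bounds coincide because every retrieved document is judged; at k = 5 the two unjudged documents widen the interval, mirroring the abstract's point that measuring deeper than the pool increases uncertainty in system comparison.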
Keywords: Information Retrieval evaluation; Incompleteness; System comparison; Test collections
This article is indexed in ScienceDirect and other databases.