Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability
Institution:1. Julius-Maximilians-Universität Würzburg, Würzburg, Germany;2. Otto-von-Guericke-Universität Magdeburg, Magdeburg, Germany;3. TU Dortmund University, Otto-Hahn-Str. 14, 44227 Dortmund, Germany
Abstract: Machine learning algorithms enable advanced decision making in contemporary intelligent systems. Research indicates that there is a tradeoff between their model performance and explainability: machine learning models with higher performance are often based on more complex algorithms and therefore lack explainability, and vice versa. However, there is little to no empirical evidence of this tradeoff from an end user perspective. We aim to provide such empirical evidence by conducting two user experiments. Using two distinct datasets, we first measure the tradeoff for five common classes of machine learning algorithms. Second, we address end user perceptions of explainable artificial intelligence (XAI) augmentations aimed at increasing the understanding of the decision logic of high-performing complex models. Our results diverge from the widespread assumption of a tradeoff curve and indicate that, in the end user's perception, the tradeoff between model performance and explainability is much less gradual. This stands in stark contrast to assumed inherent model interpretability. Further, we found the tradeoff to be situational, for example, due to data complexity. Results of our second experiment show that while explainable artificial intelligence augmentations can be used to increase explainability, the type of explanation plays an essential role in end user perception.
Keywords: Machine learning; Explainability; Performance; Tradeoff; XAI
Indexed in ScienceDirect and other databases.