Using Explainable AI to Measure Feature Contribution to Uncertainty

Authors

  • Katherine Elizabeth Brown, Tennessee Technological University
  • Douglas A. Talbert, Tennessee Technological University

DOI:

https://doi.org/10.32473/flairs.v35i.130662

Keywords:

deep learning, uncertainty quantification, explainable AI

Abstract

The application of artificial intelligence techniques in safety-critical domains such as medicine and self-driving vehicles has raised questions regarding their trustworthiness and reliability. One well-researched avenue for improving trust in and reliability of deep learning is uncertainty quantification. Uncertainty measures the algorithm’s lack of trust in its predictions, and this information is important for practitioners using machine learning-based decision support. A variety of techniques exist that produce uncertainty estimates for machine learning predictions; however, very few techniques attempt to explain why that uncertainty exists in the prediction. Explainable Artificial Intelligence (XAI) is an umbrella term that encompasses techniques that provide some level of transparency to machine learning predictions. This can include information on which inputs contributed to or detracted from the algorithm’s prediction. This work focuses on applying existing XAI techniques to deep neural networks to understand how features contribute to epistemic uncertainty. Epistemic uncertainty is a measure of confidence in a prediction given the training data distribution upon which the neural network was trained. In this work, we apply common feature attribution XAI techniques to efficiently deduce explanations of epistemic uncertainty in deep neural networks.
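To make the idea concrete, the sketch below illustrates one way feature attribution can be pointed at an uncertainty estimate rather than at a class prediction. This is a minimal, hypothetical example and not the authors' implementation: it uses Monte Carlo dropout as the epistemic uncertainty proxy and a simple gradient-based saliency as the attribution method; the toy model, random input, and sample count are placeholders.

```python
# Minimal sketch (assumptions: MC dropout for epistemic uncertainty,
# gradient saliency for attribution; toy model and data are hypothetical).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy network with dropout so repeated stochastic forward passes
# yield an epistemic uncertainty estimate.
model = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(32, 1),
)
model.train()  # keep dropout active at prediction time for MC sampling

x = torch.randn(1, 8, requires_grad=True)  # one hypothetical input

# Monte Carlo dropout: repeated stochastic forward passes.
n_samples = 50
preds = torch.stack([model(x) for _ in range(n_samples)])

# Epistemic uncertainty proxy: variance of the MC predictions.
uncertainty = preds.var(dim=0).sum()

# Feature attribution for the uncertainty itself: gradient of the
# uncertainty estimate with respect to the input features (saliency).
uncertainty.backward()
attributions = x.grad.squeeze()

for i, a in enumerate(attributions.tolist()):
    print(f"feature {i}: contribution to uncertainty ~ {a:+.4f}")
```

Any feature attribution method that accepts a scalar target (e.g., integrated gradients or SHAP) could be substituted for the plain gradient here by treating the uncertainty estimate as the quantity to be explained.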

Published

04-05-2022

How to Cite

Brown, K. E., & Talbert, D. A. (2022). Using Explainable AI to Measure Feature Contribution to Uncertainty. The International FLAIRS Conference Proceedings, 35. https://doi.org/10.32473/flairs.v35i.130662

Section

Special Track: Explainable, Fair, and Trustworthy AI