Measuring Interpretability: A systematic literature review of interpretability measures in artificial intelligence

Authors

  • Prateek Goel Drexel University
  • Rosina Weber Drexel University

DOI:

https://doi.org/10.32473/flairs.38.1.138992

Keywords:

Interpretability, Literature survey, Objective measurement

Abstract

Advancement in any field requires approaches for measurement; failing to build such approaches inhibits progress
within the field. In the context of interpretability in Artificial Intelligence (AI), the lack of widely adopted evaluation
and measurement approaches prevents its advancement. While some approaches in the literature propose ways to measure interpretability,
no consensus exists on how to measure it objectively. To advance the state of the art, a clear understanding
of these approaches is essential. This paper conducts a systematic review of existing approaches that propose to measure or quantify interpretability or its aspects. The resulting analysis identifies important aspects to consider when measuring interpretability. We found that no approaches propose to measure interpretability directly; instead, they quantify aspects associated with it. This review identifies four such aspects.

Published

14-05-2025

How to Cite

Goel, P., & Weber, R. (2025). Measuring Interpretability: A systematic literature review of interpretability measures in artificial intelligence. The International FLAIRS Conference Proceedings, 38(1). https://doi.org/10.32473/flairs.38.1.138992

Issue

Section

Special Track: Explainable, Fair, and Trustworthy AI