Fully Interpretable and Adjustable Model for Depression Diagnosis: A Qualitative Approach

Authors

  • Kuo Deng, Berry College
  • Xiaomeng Ye, Berry College
  • Kun Wang, The University of Iowa
  • Angelina Pennino, Berry College
  • Abigail Jarvis, Berry College
  • Yola Hall, Berry College

DOI:

https://doi.org/10.32473/flairs.38.1.138733

Keywords:

explainable AI, AI in healthcare, mental health, interpretable AI

Abstract

Recent advances in machine learning (ML) have enabled AI applications in mental disorder diagnosis, but many methods remain black-box or rely on post-hoc explanations that are neither straightforward nor actionable for mental health practitioners. Meanwhile, interpretable methods, such as k-nearest neighbors (k-NN) classification, struggle with complex or high-dimensional data. Moreover, there is a lack of research on users' real experiences with interpretable AI. This study demonstrates a network-based k-NN model (NN-kNN) that combines the interpretability of k-NN with the predictive power of neural networks. The model's predictions can be fully explained in terms of activated features and neighboring cases. We experimented with the model to predict the risk of depression and interviewed practitioners in a qualitative study. The practitioners' feedback emphasized the model's adaptability, integration of clinical expertise, and transparency in the diagnostic process, highlighting its potential to ethically improve practitioners' diagnostic precision and confidence.
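The abstract's notion of a prediction "fully explained in terms of activated features and neighboring cases" can be illustrated with a plain k-NN classifier that returns, alongside its label, the neighbors and per-feature distance contributions that produced it. The feature names, cases, and function below are invented for demonstration; this is a minimal sketch of the general idea, not the authors' NN-kNN model.

```python
import math

# Hypothetical screening features and case base (invented for illustration).
FEATURES = ["sleep_score", "mood_score", "energy_score"]
CASES = [  # (feature vector, depression-risk label)
    ([0.9, 0.8, 0.7], "low"),
    ([0.2, 0.3, 0.4], "high"),
    ([0.3, 0.2, 0.3], "high"),
    ([0.8, 0.9, 0.8], "low"),
]

def explainable_knn(query, k=3):
    """Return a prediction plus the neighboring cases and the
    per-feature squared-distance contributions behind it."""
    scored = []
    for vec, label in CASES:
        # Each feature's contribution to the distance is kept separately,
        # so the explanation can show *why* a case counts as a neighbor.
        contribs = {f: (q - v) ** 2 for f, q, v in zip(FEATURES, query, vec)}
        scored.append((math.sqrt(sum(contribs.values())), label, contribs))
    scored.sort(key=lambda s: s[0])
    neighbors = scored[:k]
    votes = {}
    for _, label, _ in neighbors:
        votes[label] = votes.get(label, 0) + 1
    prediction = max(votes, key=votes.get)
    return prediction, neighbors  # the prediction and its full explanation

pred, neighbors = explainable_knn([0.25, 0.3, 0.35])
print(pred)  # majority label among the k nearest cases
```

Because every component of the decision (neighbor identity, distance, and feature contribution) is surfaced rather than hidden in learned weights, a practitioner can inspect or override it; NN-kNN, as described in the abstract, additionally learns such components within a neural network.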

Published

14-05-2025

How to Cite

Deng, K., Ye, X., Wang, K., Pennino, A., Jarvis, A., & Hall, Y. (2025). Fully Interpretable and Adjustable Model for Depression Diagnosis: A Qualitative Approach. The International FLAIRS Conference Proceedings, 38(1). https://doi.org/10.32473/flairs.38.1.138733

Section

Special Track: Explainable, Fair, and Trustworthy AI