Towards Trustworthy AI: Analyzing Model Uncertainty through Monte Carlo Dropout and Noise Injection

Authors

  • Chern Chao Tai, Tennessee Technological University
  • Wesam Al Amiri
  • Abhijeet Solanki
  • Douglas Alan Talbert
  • Nan Guo
  • Syed Rafay Hasan

DOI:

https://doi.org/10.32473/flairs.38.1.138945

Abstract

Autonomous vehicles (AVs) require intelligent computer vision (CV) to perform critical navigational perception tasks. To achieve this, sensors such as cameras, LiDAR, and radar provide data to artificial intelligence (AI) systems. Continuous monitoring of these intelligent CV systems is required to achieve a trustworthy AI system in a zero-trust (ZT) environment. This paper introduces a novel two-stage framework that provides a mechanism for achieving this monitoring in a ZT environment. We combine Monte Carlo (MC) dropout with one-class classification techniques to propose a framework for trustworthy AI systems in AVs. Through extensive experimentation with varying noise levels and numbers of MC samples, we demonstrate that our framework achieves promising results in anomaly detection. In particular, our framework explores the trade-off between detection accuracy and computational overhead: with an MC sample size of 5, we achieved a high throughput of 46.4 FPS, but accuracy dropped to as low as 61.5%. This study provides valuable insights for real-world AV applications.
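The core idea the abstract describes, keeping dropout active at inference and measuring the spread of repeated stochastic forward passes on (possibly noise-injected) inputs, can be illustrated with a minimal sketch. This is not the authors' implementation; the toy model, weights, and noise level are illustrative assumptions, and a real framework would apply MC dropout to a trained CV network before feeding the resulting uncertainty to a one-class classifier.

```python
import random
import statistics

def stochastic_forward(x, p_drop=0.5, rng=random):
    """One forward pass of a toy one-layer model with dropout
    kept active at inference time (the MC-dropout trick)."""
    weights = [0.4, -0.2, 0.7, 0.1]            # fixed toy weights
    kept = [w for w in weights if rng.random() > p_drop]
    scale = 1.0 / (1.0 - p_drop)               # inverted-dropout scaling
    return sum(w * x for w in kept) * scale

def mc_dropout_predict(x, n_samples=5, noise_std=0.0, rng=random):
    """Run n_samples stochastic passes on a noise-injected input;
    return the mean prediction and the predictive std (uncertainty)."""
    preds = []
    for _ in range(n_samples):
        noisy_x = x + rng.gauss(0.0, noise_std)  # noise injection
        preds.append(stochastic_forward(noisy_x, rng=rng))
    return statistics.mean(preds), statistics.pstdev(preds)

rng = random.Random(0)
_, std_clean = mc_dropout_predict(1.0, n_samples=200, noise_std=0.0, rng=rng)
_, std_noisy = mc_dropout_predict(1.0, n_samples=200, noise_std=1.0, rng=rng)
print(f"uncertainty (clean input): {std_clean:.3f}")
print(f"uncertainty (noisy input): {std_noisy:.3f}")
```

Higher input noise inflates the predictive standard deviation, which is the signal a downstream one-class classifier could threshold to flag anomalous inputs; the paper's FPS/accuracy trade-off then comes from choosing how many MC samples to draw per frame.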

Published

14-05-2025

How to Cite

Tai, C. C., Al Amiri, W., Solanki, A., Talbert, D. A., Guo, N., & Hasan, S. R. (2025). Towards Trustworthy AI: Analyzing Model Uncertainty through Monte Carlo Dropout and Noise Injection. The International FLAIRS Conference Proceedings, 38(1). https://doi.org/10.32473/flairs.38.1.138945

Issue

Section

Special Track: Explainable, Fair, and Trustworthy AI