Towards Trustworthy AI: Analyzing Model Uncertainty through Monte Carlo Dropout and Noise Injection
DOI: https://doi.org/10.32473/flairs.38.1.138945

Abstract
Autonomous vehicles (AVs) require intelligent computer vision (CV) to perform critical navigational perception tasks. To achieve this, sensors such as cameras, LiDAR, and radar provide data to artificial intelligence (AI) systems. Continuous monitoring of these intelligent CV systems is required to achieve a trustworthy AI system in a zero-trust (ZT) environment. This paper introduces a novel two-stage framework that provides a mechanism for achieving this monitoring in a ZT environment. The framework combines Monte Carlo (MC) dropout with one-class classification techniques to work toward trustworthy AI systems for AVs. Through extensive experimentation with varying noise levels and numbers of MC samples, we demonstrate that our framework achieves promising results in anomaly detection. In particular, our framework explores the trade-off between detection accuracy and computational overhead: with an MC sample size of 5, we achieved a high throughput of 46.4 FPS, although accuracy drops to 61.5%. This study provides valuable insights for real-world AV applications.
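The core idea behind MC dropout, as used in the abstract, can be sketched in a few lines: dropout is kept active at inference time, several stochastic forward passes are run on the same input, and the spread (variance) across those passes is read as a model-uncertainty signal. The sketch below is a minimal, illustrative NumPy version, not the paper's implementation; the toy network, weights, and dropout rate are all assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for a tiny two-layer network (illustrative values only)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x, rng, p_drop=0.5):
    """One stochastic forward pass with dropout kept active at inference."""
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop   # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=5, p_drop=0.5, seed=1):
    """Monte Carlo dropout: aggregate n_samples stochastic passes.

    Returns the predictive mean and per-output variance; a high
    variance flags inputs the model is uncertain about, which is the
    signal an anomaly detector (e.g. a one-class classifier) can use.
    """
    rng = np.random.default_rng(seed)
    samples = np.stack([forward(x, rng, p_drop) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)

x = rng.normal(size=(1, 4))               # one toy input vector
mean, var = mc_dropout_predict(x, n_samples=5)
```

The `n_samples` parameter mirrors the MC sample size discussed in the abstract: more passes give a more stable uncertainty estimate but cost proportionally more compute, which is exactly the accuracy-versus-FPS trade-off the paper explores.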
License
Copyright (c) 2025 Chern Chao Tai, Wesam Al Amiri, Abhijeet Solanki, Douglas Alan Talbert, Nan Guo, Syed Rafay Hasan

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.