Measuring the Impact of Scene Level Objects: A Novel Method for Quantitative Explanations
DOI:
https://doi.org/10.32473/flairs.38.1.138922

Keywords:
Explainability, Machine Learning, Black Box Model, Scene Level Objects, Context

Abstract
Although precision, recall, and other common metrics can provide a useful window into the performance of an object detection model, they offer little insight into the model's decision process. Regardless of the quality of the training data and process, the features that an object detection model learns cannot be guaranteed. A model may learn a relationship between certain background context, i.e., scene level objects, and the presence of the labeled classes, and standard performance metrics would not identify this phenomenon. This paper presents a black box explainability method for additional verification of object detection models by quantifying the impact of scene level objects on the identification of the classes within the image. By comparing the mean Average Precision (mAP) of a model on test data with and without certain scene level objects, the contributions of these objects to the model's performance become clearer. This work presents two experiments to test the method. The experiment results provide quantitative explanations of the object detection model's decision process, enabling a deeper understanding of the model's performance.
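As a rough illustration of the comparison the abstract describes, the sketch below contrasts a detector's mAP on the original test set with its mAP on the same set after a chosen scene-level object has been removed; a large drop suggests the model relied on that object. This is a hypothetical Python sketch, not code from the paper: evaluate_map, mask_scene_object, and the dummy data are placeholder names standing in for a real detector evaluation and a real image-editing step.

from typing import Callable, Dict, List, Tuple


def scene_object_impact(
    evaluate_map: Callable[[List[Dict]], float],
    mask_scene_object: Callable[[List[Dict], str], List[Dict]],
    test_set: List[Dict],
    scene_object: str,
) -> Tuple[float, float, float]:
    """Return (mAP with the object, mAP without it, drop in mAP)."""
    map_with = evaluate_map(test_set)
    map_without = evaluate_map(mask_scene_object(test_set, scene_object))
    # A large positive drop suggests the detector leaned on the scene object.
    return map_with, map_without, round(map_with - map_without, 4)


if __name__ == "__main__":
    # Dummy stand-ins so the sketch runs end to end; a real study would
    # plug in an actual mAP evaluation and an inpainting/masking routine.
    def fake_eval(dataset: List[Dict]) -> float:
        return 0.72 if all(img.get("road") for img in dataset) else 0.55

    def fake_mask(dataset: List[Dict], obj: str) -> List[Dict]:
        return [{k: v for k, v in img.items() if k != obj} for img in dataset]

    images = [{"road": True, "car": True} for _ in range(4)]
    print(scene_object_impact(fake_eval, fake_mask, images, "road"))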
License
Copyright (c) 2025 Lynn Vonderhaar, Timothy Elvira, Omar Ochoa

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.