Beyond Size and Accuracy: The Impact of Model Compression on Fairness
DOI: https://doi.org/10.32473/flairs.37.1.135617

Abstract
Model compression is increasingly popular in deep learning. When addressing practical problems with complex neural network models, limited computational resources can pose a significant challenge. Smaller models may provide more efficient solutions, but often at the cost of accuracy. To tackle this problem, researchers use model compression techniques to transform large, complex models into simpler, faster ones, aiming to reduce computational cost while minimizing the loss of accuracy. The majority of model compression research focuses exclusively on model accuracy and size/speedup as performance metrics. This paper explores how different model compression methods impact the fairness/bias of a model. We conducted our experiments on the COMPAS Recidivism Racial Bias dataset, evaluating a variety of model compression techniques across multiple bias groups. Our findings indicate that both the type and the amount of compression have a substantial impact on the accuracy and the fairness/bias of the model.
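To make the kind of evaluation described above concrete, here is a minimal illustrative sketch (not the paper's code, and not tied to the COMPAS dataset): it applies magnitude pruning, one common compression technique, to a tiny linear classifier and computes a simple group-fairness metric, demographic parity difference, on the model's predictions. All weights, data, and the specific metric choice here are hypothetical assumptions for illustration.

```python
# Illustrative sketch: magnitude pruning of a toy linear classifier plus a
# demographic parity check. Not the authors' implementation; all values
# below are made up for demonstration.

def predict(weights, x, threshold=0.0):
    """Binary prediction from a linear score."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > threshold else 0

def prune(weights, fraction):
    """Magnitude pruning: zero out the smallest `fraction` of weights."""
    k = int(len(weights) * fraction)
    if k == 0:
        return list(weights)
    cutoff = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= cutoff else w for w in weights]

def demographic_parity_diff(preds, groups):
    """Absolute gap in positive-prediction rates between the two
    most extreme groups: |P(pred=1 | g) - P(pred=1 | g')|."""
    rates = []
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates.append(sum(preds[i] for i in idx) / len(idx))
    return max(rates) - min(rates)

# Hypothetical model, inputs, and group labels.
weights = [0.8, -0.05, 1.2, 0.02]
data = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
groups = ["A", "A", "B", "B"]

pruned = prune(weights, 0.5)  # drop the two smallest-magnitude weights
before = [predict(weights, x) for x in data]
after = [predict(pruned, x) for x in data]
print("pruned weights:", pruned)
print("DPD before:", demographic_parity_diff(before, groups))
print("DPD after: ", demographic_parity_diff(after, groups))
```

Comparing the fairness metric before and after compression, per compression type and amount, is the general shape of the study the abstract describes; the paper's actual experiments use the COMPAS dataset and multiple compression techniques.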
License
Copyright (c) 2024 Moumita Kamal, Douglas A. Talbert
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.