Beyond Size and Accuracy: The Impact of Model Compression on Fairness

Authors

M. Kamal and D. Talbert

DOI:

https://doi.org/10.32473/flairs.37.1.135617

Abstract

Model compression is increasingly popular in the domain of deep learning. When addressing practical problems with complex neural network models, the availability of computational resources can pose a significant challenge. While smaller models may provide more efficient solutions, they often come at the cost of accuracy. To tackle this problem, researchers often use model compression techniques to transform large, complex models into simpler, faster ones. These techniques aim to reduce computational cost while minimizing the loss of accuracy. The majority of model compression research focuses exclusively on model accuracy and size/speedup as performance metrics. This paper explores how different methods of model compression affect the fairness/bias of a model. We conducted our experiments using the COMPAS Recidivism Racial Bias dataset and evaluated a variety of model compression techniques across multiple bias groups. Our findings indicate that the type and amount of compression have a substantial impact on both the accuracy and the fairness/bias of the model.
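The paper's own code is not included here, but the kind of group-wise fairness evaluation the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' method: the metric shown (demographic parity difference) is one common fairness measure, and all function names and toy data are hypothetical.

```python
# Hedged sketch: evaluating a model's predictions per bias group, as described
# in the abstract. Demographic parity difference is used as an example fairness
# metric; the paper may use different metrics. Data here is a toy stand-in.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups.

    A value of 0 means every group receives positive predictions at the
    same rate; larger values indicate greater disparity.
    """
    rates = []
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)

# Toy example: binary predictions for individuals in two groups
# (e.g., racial groups in a COMPAS-style dataset).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy(y_true, y_pred))                        # 0.75
print(demographic_parity_difference(y_pred, groups))   # 0.5
```

In a study like the one described, metrics of this kind would be recomputed after each compression setting (type and amount), so that changes in the group-level disparity can be tracked alongside changes in overall accuracy.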

Published

12-05-2024

How to Cite

Kamal, M., & Talbert, D. (2024). Beyond Size and Accuracy: The Impact of Model Compression on Fairness. The International FLAIRS Conference Proceedings, 37(1). https://doi.org/10.32473/flairs.37.1.135617

Issue

37 (1)

Section

Special Track: Explainable, Fair, and Trustworthy AI