An Exploration of Consistency Learning with Data Augmentation
DOI: https://doi.org/10.32473/flairs.v35i.130669

Abstract
Deep Learning has achieved remarkable success with Supervised Learning. Nearly all of these successes require very large manually annotated datasets. Data augmentation has enabled Supervised Learning with less labeled data, while avoiding the pitfalls of overfitting. However, Supervised Learning still fails to be Robust, making different predictions for original and augmented data points. We study the addition of a Consistency Loss between representations of original and augmented data points. Although this offers additional structure for invariance to augmentation, it may fall into the trap of representation collapse. Representation collapse describes the solution of mapping every input to a constant output, thus cheating to solve the consistency task. Many techniques have been developed to avoid representation collapse, such as stopping gradients, entropy penalties, and applying the Consistency Loss at intermediate layers. We provide an analysis of these techniques in interaction with Supervised Learning on the CIFAR-10 image classification dataset. Our consistency learning models achieve a 1.7% absolute improvement on the original CIFAR-10 test set over the supervised baseline. More interestingly, we are able to dramatically reduce our proposed Distributional Distance metric with the Consistency Loss. Distributional Distance provides a more fine-grained analysis of invariance to corrupted images. Readers will understand the practice of adding a Consistency Loss to improve Robustness in Deep Learning.
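The abstract describes combining a standard supervised objective with a Consistency Loss between representations of an image and its augmented view, with a stop-gradient as one guard against representation collapse. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the linear "encoder", the augmentation (additive noise), and the weighting `lambda_c` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy linear encoder standing in for a deep network."""
    return x @ W

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    """Supervised loss on the original (labeled) examples."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def consistency_loss(rep_orig, rep_aug, stop_gradient=True):
    """Mean squared distance between representations of an image and its
    augmented view. With stop_gradient=True the augmented branch is treated
    as a fixed target (one common trick to avoid representation collapse)."""
    target = rep_aug.copy() if stop_gradient else rep_aug  # NumPy analogue of detach
    return np.mean((rep_orig - target) ** 2)

# Toy batch: 4 "images" with 8 features, 3 classes (hypothetical sizes).
W = rng.normal(size=(8, 3))
x = rng.normal(size=(4, 8))
x_aug = x + 0.1 * rng.normal(size=x.shape)  # stand-in for data augmentation
labels = np.array([0, 1, 2, 0])

rep, rep_aug = encode(x, W), encode(x_aug, W)
lambda_c = 1.0  # consistency weight (assumed, not from the paper)
total = cross_entropy(softmax(rep), labels) + lambda_c * consistency_loss(rep, rep_aug)
print(float(total))
```

A collapsed encoder that maps every input to the same vector would drive `consistency_loss` to zero while ignoring the data, which is why the supervised term and tricks like the stop-gradient are needed alongside it.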
License
© Connor Shorten, Taghi M. Khoshgoftaar 2022

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.