An Exploration of Consistency Learning with Data Augmentation

Authors

  • Connor Shorten, Florida Atlantic University
  • Taghi M. Khoshgoftaar, Florida Atlantic University

DOI:

https://doi.org/10.32473/flairs.v35i.130669

Abstract

Deep Learning has achieved remarkable success with Supervised Learning, but nearly all of these successes require very large manually annotated datasets. Data augmentation has enabled Supervised Learning with less labeled data while avoiding the pitfalls of overfitting. However, Supervised Learning still fails to be Robust, making different predictions for original and augmented data points. We study the addition of a Consistency Loss between representations of original and augmented data points. Although this offers additional structure for invariance to augmentation, it may fall into the trap of representation collapse. Representation collapse describes the degenerate solution of mapping every input to a constant output, thus cheating to solve the consistency task. Many techniques have been developed to avoid representation collapse, such as stopping gradients, entropy penalties, and applying the Consistency Loss at intermediate layers. We provide an analysis of these techniques in interaction with Supervised Learning on the CIFAR-10 image classification dataset. Our consistency learning models achieve a 1.7% absolute improvement on the original CIFAR-10 test set over the supervised baseline. More interestingly, we dramatically reduce our proposed Distributional Distance metric with the Consistency Loss. Distributional Distance provides a more fine-grained analysis of invariance to corrupted images. Readers will understand the practice of adding a Consistency Loss to improve Robustness in Deep Learning.
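The abstract describes a combined objective: a standard supervised loss plus a Consistency Loss between representations of an original image and its augmented view, with a stop-gradient as one way to avoid representation collapse. The PyTorch sketch below is a minimal illustration of that recipe, not the authors' code; the `model` interface (returning logits and a representation), the `augment` transform, and the loss weight `lam` are all assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_training_step(model, x, y, augment, lam=1.0):
    """One training step combining supervised and consistency losses.

    A minimal sketch of the approach the abstract outlines:
    cross-entropy on the original image plus a consistency term
    between representations of the original and augmented views.
    All names here are illustrative assumptions.
    """
    x_aug = augment(x)            # augmented view of the batch

    logits, z = model(x)          # assume model returns (logits, representation)
    _, z_aug = model(x_aug)

    supervised_loss = F.cross_entropy(logits, y)

    # Consistency term: pull the augmented representation toward the
    # original one. detach() stops gradients through the target branch,
    # one of the anti-collapse techniques the abstract mentions.
    consistency_loss = F.mse_loss(z_aug, z.detach())

    return supervised_loss + lam * consistency_loss
```

Without the stop-gradient (or an entropy penalty, or restricting the consistency term to intermediate layers), the consistency objective alone is trivially minimized by mapping every input to the same constant representation, which is exactly the collapse the abstract warns about.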
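The paper's exact definition of Distributional Distance is not given on this page. As an illustration only, a common way to quantify prediction invariance is the average divergence between the model's output distributions on clean and corrupted inputs; the sketch below uses a symmetric KL divergence as a stand-in, and should not be read as the authors' metric.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def distributional_distance(model, x_clean, x_corrupt):
    """Illustrative invariance metric, NOT the paper's exact definition.

    Measures how far the model's predictive distributions move when
    inputs are corrupted, via symmetric KL averaged over the batch.
    Assumes model(x) returns class logits.
    """
    logp_clean = F.log_softmax(model(x_clean), dim=-1)
    logp_corrupt = F.log_softmax(model(x_corrupt), dim=-1)
    p_clean, p_corrupt = logp_clean.exp(), logp_corrupt.exp()

    # F.kl_div(input, target) computes KL(target || input),
    # where input holds log-probabilities and target probabilities.
    kl_pq = F.kl_div(logp_corrupt, p_clean, reduction="batchmean")
    kl_qp = F.kl_div(logp_clean, p_corrupt, reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)
```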

Published

04-05-2022

How to Cite

Shorten, C., & Khoshgoftaar, T. M. (2022). An Exploration of Consistency Learning with Data Augmentation. The International FLAIRS Conference Proceedings, 35. https://doi.org/10.32473/flairs.v35i.130669

Issue

Vol. 35 (2022)

Section

Main Track Proceedings