Generative Adversarial learning with Negative Data Augmentation for Semi-supervised Text Classification
Keywords: NLP, Text Classification, Negative Data Augmentation, GANs
In recent years, semi-supervised generative adversarial network (SS-GAN) models such as GAN-BERT have achieved promising results on text classification. One technique these models use to mitigate mode collapse in the generator is feature matching (FM). Although FM addresses some of the critical issues of SS-GANs, these models still suffer from mode collapse, with missing coverage outside the data manifold. Moreover, FM only loosely matches the distributions of the real data and the fake generated samples. As a result, the generator can place fake samples inside high-density regions of the data manifold, where the discriminator learns to misclassify them as lying outside the manifold. In this work, we employ the negative data augmentation (NDA) technique, for the first time in text classification, to alleviate these problems. NDA is a distinctive way of producing out-of-distribution fake examples by applying a mixup transformation to the fake samples and augmented real data. In our new model (NDA-GAN), we produce NDA samples by combining the generator's output with the contextual representation of the real data. As a result of the mixing, NDA samples are less likely to lie in high-density regions, and because they are blended with real-data representations, they remain reasonably close to the data manifold. Consequently, the NDA samples increase the discriminator's ability to find the optimal decision boundary. Our experimental results demonstrate that the negative augmented samples improve the overall accuracy of our proposed model and make it more confident when detecting out-of-distribution samples.
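The core NDA operation described above, blending generator outputs with contextual representations of real data via mixup, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the per-sample Beta-distributed mixing coefficient, and the use of generic feature vectors (e.g., BERT [CLS] embeddings) are assumptions for demonstration.

```python
import numpy as np

def mixup_nda(fake_batch, real_repr, alpha=1.0, rng=None):
    """Blend generator outputs with contextual representations of real data
    to form negative-data-augmentation (NDA) samples.

    fake_batch: (batch, dim) array of generator outputs
    real_repr:  (batch, dim) array of contextual representations of real
                data (e.g., BERT [CLS] vectors -- an assumed choice here)
    alpha:      Beta distribution parameter controlling how strongly the
                two sources are mixed (a hypothetical default)
    """
    rng = rng or np.random.default_rng()
    # One mixing coefficient per sample, drawn from Beta(alpha, alpha),
    # as in standard mixup
    lam = rng.beta(alpha, alpha, size=(fake_batch.shape[0], 1))
    # Convex combination: pushed away from high-density regions of the
    # real data, yet kept close to the data manifold by the real component
    return lam * fake_batch + (1.0 - lam) * real_repr
```

In a training loop, these mixed samples would be fed to the discriminator as an explicit "fake" class, sharpening its decision boundary around the data manifold.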
Copyright (c) 2022 Shahriar Shayesteh, Diana Inkpen
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.