TSAL: Two steps Adversarial learning based domain adaptation

Authors

  • Haidi Hasan Badr, Research Assistant
  • Nayer Mahmoud Wanas
  • Magda Fayek

DOI:

https://doi.org/10.32473/flairs.v34i1.128510

Keywords:

Transfer learning, Adversarial learning, Unsupervised domain adaptation

Abstract

Since labeled data availability differs greatly across domains, domain adaptation focuses on learning in new and unfamiliar domains by reducing distribution divergence. Recent research suggests that adversarial learning is a promising way to achieve the domain adaptation objective: it is a strategy for learning domain-transferable features in robust deep networks. This paper introduces TSAL, a two-step adversarial learning framework. It addresses the real-world text classification problem in which the source domain(s) have labeled data but the target domain(s) have only unlabeled data. TSAL combines joint adversarial learning with class information and a domain-alignment deep network architecture to learn both domain-invariant and domain-specific feature extractors. It consists of two training steps, similar to the fine-tuning paradigm in which pre-trained model weights serve as the initialization for training on new data. TSAL's two training phases, however, use the same data rather than different data, as is the case with fine-tuning. Furthermore, TSAL uses only the domain-invariant feature extractor learned in the first phase as the initialization for its counterpart in the second. By doubling the training, TSAL better exploits the small amount of unlabeled target-domain data and learns effectively what to share between domains. A detailed analysis on several benchmark datasets shows that our model consistently outperforms prior art across a wide range of dataset distributions.
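The two-step structure described above can be sketched in code. This is a minimal, illustrative stand-in (not the paper's architecture): linear maps replace the deep feature extractors, a feature-mean alignment term stands in for the gradient-reversal-based domain discriminator, and the domain-specific extractors are omitted. All function and variable names here are hypothetical; the sketch only demonstrates the key TSAL idea that both training phases run on the same data, with the shared (domain-invariant) extractor from phase one initializing its counterpart in phase two.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_extractor(dim_in, dim_out):
    """Fresh random weights for a linear 'feature extractor' (toy stand-in)."""
    return rng.normal(scale=0.1, size=(dim_in, dim_out))

def train_phase(X_src, y_src, X_tgt, shared_W, lr=0.01, epochs=20):
    """One TSAL training phase (simplified sketch).

    A linear classifier is fit on labeled source features, while the
    shared extractor is nudged to align source and target feature means
    (a crude surrogate for the adversarial domain discriminator).
    """
    clf_w = np.zeros(shared_W.shape[1])
    for _ in range(epochs):
        F_src = X_src @ shared_W
        # classification gradient step on the labeled source domain
        err = F_src @ clf_w - y_src
        clf_w -= lr * F_src.T @ err / len(y_src)
        # domain-alignment gradient step on the shared extractor:
        # shrink the gap between source and target feature means
        d = X_src.mean(axis=0) - X_tgt.mean(axis=0)
        gap = d @ shared_W
        shared_W -= lr * np.outer(d, gap)
    return shared_W, clf_w

# toy data: labeled source, unlabeled (shifted) target
X_src = rng.normal(size=(100, 5))
y_src = (X_src[:, 0] > 0).astype(float)
X_tgt = rng.normal(loc=0.5, size=(80, 5))

# phase 1: train the shared extractor from scratch
W1, _ = train_phase(X_src, y_src, X_tgt, init_extractor(5, 3))

# phase 2: same data again, but the shared extractor is
# initialized from phase 1's weights (the "two steps" in TSAL)
W2, clf = train_phase(X_src, y_src, X_tgt, W1.copy())
```

Unlike fine-tuning, the second call receives the same `(X_src, y_src, X_tgt)` as the first; only the learned shared weights carry over, mirroring the transfer described in the abstract.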

Published

2021-04-18

How to Cite

Badr, H. H., Wanas, N. M., & Fayek, M. (2021). TSAL: Two steps Adversarial learning based domain adaptation. The International FLAIRS Conference Proceedings, 34. https://doi.org/10.32473/flairs.v34i1.128510

Section

Main Track Proceedings