TLMOTE: A Topic-based Language Modelling Approach for Text Oversampling

Authors

  • Arjun Choudhry, Delhi Technological University
  • Seba Susan, Delhi Technological University
  • Anmol Bansal, Delhi Technological University
  • Anubhav Sharma, Delhi Technological University

DOI:

https://doi.org/10.32473/flairs.v35i.130676

Keywords:

Text oversampling, TLMOTE, Language modelling, Topic modelling, Latent Dirichlet Allocation, Suggestion Mining, Sentiment analysis, SMS Spam detection

Abstract

Training machine learning and deep learning models on unbalanced datasets can bias the models towards the majority classes. To mitigate this bias, researchers have proposed various techniques for oversampling the minority-class data points. However, most state-of-the-art oversampling techniques generate artificial data points that, although similar to the original minority-class data points, are not intelligible to a human reader. In this work, we present the Topic-based Language Modelling Approach for Text Oversampling (TLMOTE), a novel text oversampling technique for supervised learning from unbalanced datasets. TLMOTE improves upon previous approaches by generating data points that are intelligible to the reader, relate to the main topics of the minority class, and introduce more variation into the synthetic data. We evaluate the efficacy of our approach on tasks such as Suggestion Mining (SemEval 2019 Task 9 Subtasks A and B), SMS Spam Detection, and Sentiment Analysis. Experimental results verify that oversampling unbalanced datasets with TLMOTE yields a higher macro F1 score than other oversampling techniques.

Published

04-05-2022

How to Cite

Choudhry, A., Susan, S., Bansal, A., & Sharma, A. (2022). TLMOTE: A Topic-based Language Modelling Approach for Text Oversampling. The International FLAIRS Conference Proceedings, 35. https://doi.org/10.32473/flairs.v35i.130676

Section

Special Track: Applied Natural Language Processing