Comparative Analysis of Transformers to Support Fine-Grained Emotion Detection in Short-Text Data

Authors

  • Robert H. Frye, University of North Carolina at Charlotte
  • David C. Wilson

DOI:

https://doi.org/10.32473/flairs.v35i.130612

Keywords:

transformers, hyperparameters, emotion detection, fine-grained emotion detection, fine-tuning

Abstract

Understanding a person’s mood and circumstances by way of sentiment or finer-grained emotion detection can play a significant role in AI systems and applications, such as chat dialogue or reviews. Analysis of emotion from text typically requires specialized text or document understanding, and recent work has focused on transformer learning approaches. Common transformer models (e.g., BERT, RoBERTa, ELECTRA, XLM-R, and XLNet) have been pre-trained on longer texts of well-written English; however, many application contexts align more directly with social media content, or have a shorter format akin to social media, where texts often bend or violate standard language conventions. To understand the applicability of and tradeoffs among common transformers in such contexts, our research investigates accuracy and efficiency considerations in fine-tuning transformers for granular emotion detection in short-text data. This paper presents a comparative study of the performance of five common transformers applied to multi-category emotion detection in short-text Twitter data, and explores different considerations for hyperparameter settings in this context. Results show significant benefits from fine-tuning in comparison to the recommended baselines for these approaches, and provide guidance for fine-tuning to support fine-grained emotion detection in short texts.
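For illustration, the sketch below shows the general shape of the fine-tuning runs such a comparison involves, using the Hugging Face Transformers API. The model name, dataset files, number of emotion categories, and hyperparameter values here are assumptions chosen for the example, not the settings or results reported in the paper.

    # Minimal fine-tuning sketch for multi-class emotion detection on short texts.
    # Assumptions: a CSV dataset with "text" and "label" columns, 8 emotion
    # categories, and commonly used baseline hyperparameters.
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)
    from datasets import load_dataset

    MODEL_NAME = "bert-base-uncased"  # swap in roberta-base, xlnet-base-cased, etc.
    NUM_EMOTIONS = 8                  # hypothetical number of fine-grained categories

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=NUM_EMOTIONS)

    # Hypothetical short-text dataset files.
    dataset = load_dataset("csv", data_files={"train": "train.csv",
                                              "validation": "dev.csv"})

    def tokenize(batch):
        # Short texts: a small max_length keeps padding overhead low.
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=64)

    dataset = dataset.map(tokenize, batched=True)

    args = TrainingArguments(
        output_dir="emotion-ft",
        learning_rate=2e-5,             # a commonly recommended fine-tuning baseline
        per_device_train_batch_size=32,
        num_train_epochs=3,
    )

    trainer = Trainer(model=model, args=args,
                      train_dataset=dataset["train"],
                      eval_dataset=dataset["validation"])
    trainer.train()

Rerunning the same loop while varying MODEL_NAME and the hyperparameters above is the kind of grid a comparative study like this one sweeps when weighing accuracy against fine-tuning cost.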


Published

2022-05-04

Section

Special Track: Neural Networks and Data Mining