Comparative Analysis of Transformers to Support Fine-Grained Emotion Detection in Short-Text Data
DOI:
https://doi.org/10.32473/flairs.v35i.130612
Keywords:
transformers, hyperparameters, emotion detection, fine-grained emotion detection, fine-tuning
Abstract
Understanding a person's mood and circumstances by way of sentiment or finer-grained emotion detection can play a significant role in AI systems and applications, such as chat dialogue or reviews. Analyzing emotion in text typically requires specialized text or document understanding, and recent work has focused on transformer learning approaches. Widely used transformer models (e.g., BERT, RoBERTa, ELECTRA, XLM-R, and XLNet) have been pre-trained on longer texts of well-formed English; however, many application contexts align more directly with social media content or have a shorter format more akin to social media, where texts often bend or violate standard language conventions. To understand the applicability of and tradeoffs among common transformers within such contexts, our research investigates accuracy and efficiency considerations in fine-tuning transformers for granular emotion detection in short-text data. This paper presents a comparative study investigating the performance of five common transformers as applied in the specific context of multi-category emotion detection in short-text Twitter data. The study explores different considerations for hyperparameter settings in this context. Results show significant fine-tuning benefits in comparison to recommended baselines for the approaches and provide guidance for fine-tuning to support fine-grained emotion detection in short texts.
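As context for the fine-tuning setup the abstract describes, the sketch below shows a minimal multi-class emotion classifier fine-tuned with the Hugging Face Transformers library. The checkpoint name, emotion label set, toy tweets, and hyperparameter values are illustrative assumptions, not the study's actual data or configuration.

```python
# Minimal sketch: fine-tuning a pre-trained transformer for multi-class
# emotion detection on short texts. Model name, labels, and hyperparameters
# below are placeholders for illustration only.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Hypothetical fine-grained emotion label set (assumption).
labels = ["joy", "sadness", "anger", "fear", "surprise", "love"]

# Any of the compared checkpoints could be swapped in here, e.g.
# "roberta-base", "google/electra-base-discriminator", "xlm-roberta-base".
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=len(labels))

# Toy examples standing in for short-text (tweet-like) training data.
texts = ["so excited for the weekend!!", "ugh this traffic is the worst"]
targets = [labels.index("joy"), labels.index("anger")]

class ShortTextDataset(torch.utils.data.Dataset):
    def __init__(self, texts, targets):
        # Short texts: a small max_length keeps padding overhead low.
        self.enc = tokenizer(texts, truncation=True,
                             padding="max_length", max_length=64)
        self.targets = targets
    def __len__(self):
        return len(self.targets)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.targets[i])
        return item

# Learning rate, batch size, and epochs are the kinds of hyperparameters
# such a study varies; the values here are arbitrary starting points.
args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args,
        train_dataset=ShortTextDataset(texts, targets)).train()
```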
License
Copyright (c) 2022 Robert H. Frye, David C. Wilson
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.