Assessing the Impact of Sequence Length Learning on Classification Tasks for Transformer Encoder Models

Authors

  • Jean-Thomas Baillargeon, Université Laval
  • Luc Lamontagne

DOI:

https://doi.org/10.32473/flairs.37.1.135283

Keywords:

Transformer, Text Classification, Bias, Explainability, Data Augmentation

Abstract

Classification algorithms using Transformer architectures can be affected by the sequence length learning problem whenever observations from different classes have different length distributions. This problem causes models to use sequence length as a predictive feature instead of relying on important textual information. Although most public datasets are not affected by this problem, privately owned corpora in fields such as medicine and insurance may carry this data bias. The exploitation of sequence length as a feature poses challenges throughout the value chain, as these machine learning models can be used in critical applications. In this paper, we empirically expose this problem and present approaches to minimize its impact.
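
The abstract describes models shortcutting on length differences between classes, and the keywords mention data augmentation. The following minimal sketch (an illustration only, not the authors' code: the `length_probe` and `equalize_lengths` helpers are hypothetical, and the truncation-based mitigation is just one plausible way to decouple length from class) shows how one might probe a corpus for this bias and equalize its length distributions:

```python
# Illustrative sketch only; not the paper's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def length_probe(texts, labels):
    """Fit a classifier on token counts alone.

    Accuracy well above chance means sequence length predicts the class,
    so a Transformer encoder could exploit length instead of content."""
    lengths = np.array([[len(t.split())] for t in texts])  # crude token count
    X_tr, X_te, y_tr, y_te = train_test_split(
        lengths, labels, test_size=0.3, random_state=0, stratify=labels
    )
    clf = LogisticRegression().fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))


def equalize_lengths(texts, rng=None):
    """One plausible augmentation-style mitigation (an assumption, not
    necessarily the paper's approach): truncate each text to a length
    sampled from the pooled length distribution, decoupling length
    from class."""
    rng = rng or np.random.default_rng(0)
    pooled = [len(t.split()) for t in texts]
    return [" ".join(t.split()[: int(rng.choice(pooled))]) for t in texts]


# Toy corpus where the positive class is systematically longer.
texts = ["short claim note"] * 50 + [
    "a considerably longer claim description with many additional tokens"
] * 50
labels = [0] * 50 + [1] * 50
print(f"length-only accuracy: {length_probe(texts, labels):.2f}")  # ~1.0 -> biased
```

On a real corpus, a high length-only accuracy would flag the bias the paper studies; retraining on the length-equalized texts is one way to test how much of a model's performance depended on it.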


Published

2024-05-13

Section

Special Track: Applied Natural Language Processing