Assessing the Impact of Sequence Length Learning on Classification Tasks for Transformer Encoder Models

Authors

  • Jean-Thomas Baillargeon, Université Laval
  • Luc Lamontagne

DOI:

https://doi.org/10.32473/flairs.37.1.135283

Keywords:

Transformer, Text Classification, Bias, Explainability, Data Augmentation

Abstract

Classification algorithms using Transformer architectures can be affected by the sequence length learning problem whenever observations from different classes have different length distributions. This problem causes models to use sequence length as a predictive feature instead of relying on important textual information. Although most public datasets are not affected by this problem, privately owned corpora in fields such as medicine and insurance may carry this data bias. Exploiting sequence length as a feature poses challenges throughout the value chain, as these machine learning models may be deployed in critical applications. In this paper, we empirically expose this problem and present approaches to minimize its impacts.
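The presence of this bias can be probed directly before training a Transformer. Below is a minimal sketch, assuming scikit-learn is available, of a diagnostic baseline that classifies documents from their token count alone; the function name `length_baseline` and the inputs `corpus_texts` / `corpus_labels` are illustrative placeholders, not artifacts of the paper. If such a baseline scores well above chance, the corpus exhibits the length bias described above.

    # Illustrative sketch (not the authors' code): test whether sequence
    # length alone predicts the label.
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    def length_baseline(texts, labels):
        """Train a classifier that only sees each document's token count.

        A high held-out accuracy indicates a sequence length bias that a
        Transformer encoder could exploit as a shortcut feature.
        """
        # Crude whitespace tokenization; the single feature is the length.
        lengths = [[len(t.split())] for t in texts]
        X_tr, X_te, y_tr, y_te = train_test_split(
            lengths, labels, test_size=0.2, random_state=0, stratify=labels
        )
        clf = LogisticRegression().fit(X_tr, y_tr)
        return accuracy_score(y_te, clf.predict(X_te))

    # Usage (corpus_texts and corpus_labels stand in for your own data):
    # acc = length_baseline(corpus_texts, corpus_labels)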

Published

2024-05-13

How to Cite

Baillargeon, J.-T., & Lamontagne, L. (2024). Assessing the Impact of Sequence Length Learning on Classification Tasks for Transformer Encoder Models. The International FLAIRS Conference Proceedings, 37(1). https://doi.org/10.32473/flairs.37.1.135283

Issue

Vol. 37 No. 1 (2024)

Section

Special Track: Applied Natural Language Processing