Transformer Models for Brazilian Portuguese Question Generation: An Experimental Study

Authors

J. da Rocha Junqueira, U. Brisolara Corrêa, and L. Freitas

Keywords:

Natural Language Processing, Transformers, Parallel multi-head attention mechanisms, Question generation, Encoder-decoder models, Brazilian Portuguese, SQuAD-v1.1 dataset, Experimental fine-tuning

Abstract

Unlike tasks such as translation or summarization, generating meaningful questions requires a deep understanding of context, semantics, and syntax. This complexity arises from the need not only to comprehend the given text but also to infer information gaps, identify relevant entities, and construct syntactically and semantically correct interrogative sentences. We address this challenge by proposing an experimental fine-tuning approach for encoder-decoder models (T5, FLAN-T5, and BART-PT) tailored explicitly to Brazilian Portuguese question generation. Our study fine-tunes these models on the SQuAD-v1.1 dataset and evaluates them on the same dataset. In our experiments, BART achieved the highest scores on all ROUGE metrics (ROUGE-1: 0.46, ROUGE-2: 0.24, ROUGE-L: 0.43), suggesting greater lexical similarity in the generated questions, with results comparable to those reported for question generation in English. We discuss how these advances can improve the precision and quality of question generation in Brazilian Portuguese, bridging the gap between the training data and the intricacies of interrogative sentence construction.

Published

13-05-2024

How to Cite

da Rocha Junqueira, J., Brisolara Corrêa, U., & Freitas, L. (2024). Transformer Models for Brazilian Portuguese Question Generation: An Experimental Study. The International FLAIRS Conference Proceedings, 37(1). Retrieved from https://journals.flvc.org/FLAIRS/article/view/135334

Section

Special Track: Applied Natural Language Processing