Transformer Models for Brazilian Portuguese Question Generation: An Experimental Study
DOI: https://doi.org/10.32473/flairs.37.1.135334

Keywords: Natural Language Processing, Transformers, Parallel multi-head attention mechanisms, Question generation, Encoder-decoder models, Brazilian Portuguese, SQuAD-v1.1 dataset, Experimental fine-tuning

Abstract
Unlike tasks such as translation or summarization, generating meaningful questions requires a deep understanding of context, semantics, and syntax. This complexity arises from the need not only to comprehend the given text, but also to infer information gaps, identify relevant entities, and construct syntactically and semantically correct interrogative sentences. We address this challenge by proposing an experimental fine-tuning approach for encoder-decoder models (T5, FLAN-T5, and BART-PT) tailored explicitly to Brazilian Portuguese question generation. Our study involves fine-tuning these models on the SQuAD-v1.1 dataset and then evaluating them on the same dataset. In our experiments, BART achieved the highest scores on all ROUGE metrics (ROUGE-1: 0.46, ROUGE-2: 0.24, ROUGE-L: 0.43), suggesting greater lexical similarity in the generated questions, with results comparable to those reported for question generation in English. We explore how these advancements can significantly enhance the precision and quality of question generation in Brazilian Portuguese, bridging the gap between training data and the intricacies of interrogative sentence construction.
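The ROUGE scores reported above measure lexical overlap between generated and reference questions. The following is a minimal, illustrative sketch of ROUGE-L, which scores the longest common subsequence (LCS) of the two token sequences; published evaluations typically use a dedicated library (e.g. rouge-score), and the example questions below are hypothetical.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a):
        for j, tok_b in enumerate(b):
            if tok_a == tok_b:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l(reference: str, candidate: str) -> float:
    """ROUGE-L F1 between a reference question and a generated question."""
    ref, cand = reference.lower().split(), candidate.lower().split()
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)  # fraction of generated tokens in the LCS
    recall = lcs / len(ref)      # fraction of reference tokens in the LCS
    return 2 * precision * recall / (precision + recall)

# Hypothetical reference and model-generated questions in Portuguese:
ref_q = "quem escreveu dom casmurro"
gen_q = "quem escreveu o livro dom casmurro"
print(round(rouge_l(ref_q, gen_q), 2))  # prints 0.8
```

Because LCS preserves token order without requiring contiguity, ROUGE-L rewards questions that keep the reference's word order even when extra words are inserted, which is why it complements the n-gram-based ROUGE-1 and ROUGE-2.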
License
Copyright (c) 2024 Julia da Rocha Junqueira, Ulisses Brisolara Corrêa, Larissa Freitas
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International license.