Transformer Models for Brazilian Portuguese Question Generation: An Experimental Study
DOI: https://doi.org/10.32473/flairs.37.1.135334

Keywords:
Natural Language Processing, Transformers, Parallel Multi-Head Attention Mechanisms, Question Generation, Encoder-Decoder Models, Brazilian Portuguese, SQuAD-v1.1 Dataset, Experimental Fine-Tuning

Abstract
Unlike tasks such as translation or summarization, generating meaningful questions requires a deep understanding of context, semantics, and syntax. This complexity arises because a system must not only comprehend the given text but also infer information gaps, identify relevant entities, and construct syntactically and semantically correct interrogative sentences. We address this challenge by proposing an experimental fine-tuning approach for encoder-decoder models (T5, FLAN-T5, and BART-PT) tailored explicitly to Brazilian Portuguese question generation. We fine-tune these models on the SQuAD-v1.1 dataset and evaluate them on the same dataset. In our experiments, BART achieved the highest scores on all ROUGE metrics (ROUGE-1: 0.46, ROUGE-2: 0.24, ROUGE-L: 0.43), suggesting greater lexical similarity in the generated questions, with results comparable to those reported for question generation in English. We discuss how these advancements can enhance the precision and quality of question generation in Brazilian Portuguese, bridging the gap between the training data and the intricacies of interrogative sentence construction.
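The ROUGE-1, ROUGE-2, and ROUGE-L figures reported above are n-gram- and subsequence-overlap measures between a generated question and a reference question. As a minimal pure-Python sketch (not the evaluation code used in the study, and using simple whitespace tokenization rather than the preprocessing a production ROUGE implementation applies), ROUGE-N and ROUGE-L F1 can be computed as:

```python
from collections import Counter


def rouge_n(candidate: str, reference: str, n: int = 1) -> float:
    """ROUGE-N F1: n-gram overlap between candidate and reference strings."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def rouge_l(candidate: str, reference: str) -> float:
    """ROUGE-L F1 based on the longest common subsequence (LCS)."""
    a, b = candidate.split(), reference.split()
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(a), lcs / len(b)
    return 2 * precision * recall / (precision + recall)
```

For example, comparing the (hypothetical) generated question "o que é processamento" with the reference "o que é nlp" yields ROUGE-1 F1 = 0.75, since three of four unigrams match in each direction.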
License
Copyright 2024 Julia da Rocha Junqueira, Ulisses Brisolara Corrêa, Larissa Freitas
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.