Towards Improving Open Student Answer Assessment using Pretrained Transformers

Authors

  • Nisrine Ait Khayi, The University of Memphis
  • Vasile Rus
  • Lasang Tamang

DOI:

https://doi.org/10.32473/flairs.v34i1.128483

Keywords:

Transfer Learning, Student Answer Assessment, Pretrained Transformers, Dialog-Based Systems

Abstract

The transfer learning pretraining-finetuning paradigm has revolutionized the natural language processing field, yielding state-of-the-art results in several subfields such as text classification and question answering. However, little work has investigated pretrained language models for the open student answer assessment task. In this paper, we fine-tune pretrained T5, BERT, RoBERTa, DistilBERT, ALBERT, and XLNet models on the DT-Grade dataset, which contains freely generated (or open) student answers together with judgments of their correctness. The experimental results demonstrate the effectiveness of these models under the transfer learning pretraining-finetuning paradigm for open student answer assessment, with an improvement of 8%-15% in accuracy over previous methods. In particular, a T5-based method led to state-of-the-art results with an accuracy and F1 score of 0.88.
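For readers who want a concrete picture of the pretraining-finetuning paradigm the abstract describes, the minimal sketch below fine-tunes a pretrained transformer as a classifier over student answers using the Hugging Face Transformers library. This is not the authors' code: the checkpoint choice (roberta-base), the four label names, the toy training example, and all hyperparameters are illustrative assumptions; the paper's exact preprocessing and setup for DT-Grade may differ.

```python
# Sketch of fine-tuning a pretrained transformer for student answer
# assessment. Labels, data, and hyperparameters are illustrative only.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Assumed correctness categories (hypothetical names and ordering).
LABELS = ["correct", "correct_but_incomplete", "contradictory", "incorrect"]

class AnswerDataset(Dataset):
    """Pairs each student answer with its correctness label."""
    def __init__(self, answers, labels, tokenizer, max_len=128):
        self.enc = tokenizer(answers, truncation=True,
                             padding="max_length", max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS))

# Toy stand-in for the real DT-Grade training split.
train_ds = AnswerDataset(["the net force on the object is zero"],
                         [0], tokenizer)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```

The same loop would apply to BERT, DistilBERT, ALBERT, or XLNet by swapping the checkpoint name; T5, being a text-to-text model, would instead be fine-tuned to generate the label as a string.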

Published

2021-04-18

How to Cite

Ait Khayi, N., Rus, V., & Tamang, L. (2021). Towards Improving Open Student Answer Assessment using Pretrained Transformers. The International FLAIRS Conference Proceedings, 34. https://doi.org/10.32473/flairs.v34i1.128483

Issue

Vol. 34 No. 1 (2021)

Section

Special Track: Intelligent Learning Technologies