Learning to Rank with BERT for Argument Quality Evaluation
The task of argument quality ranking, which assesses the quality of free-text arguments, remains challenging. While most state-of-the-art approaches use point-wise ranking methods that predict an absolute quality score for each argument, we instead focus on learning to order arguments by their relative convincingness, experimenting with several learning-to-rank methods for argument quality. We leverage BERT's strong ability to build a representation of an argument, paired with learning-to-rank approaches (point-wise, pairwise, and list-wise), to rank arguments by their convincingness. We also demonstrate that an ensemble of models trained with different ranking losses often improves performance at identifying the most convincing arguments in a list. Finally, we compare BERT coupled with learning-to-rank methods against state-of-the-art approaches on all major argument quality datasets available for the ranking task, showing that a learning-to-rank approach generally performs better at identifying the topmost convincing arguments.
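To illustrate the pairwise flavor of learning-to-rank described above, here is a minimal sketch of a pairwise margin (hinge) ranking loss over argument scores. The scores, argument names, and margin value are illustrative stand-ins: in the actual setup a scoring head over BERT representations would produce them, which is omitted here for self-containment.

```python
def pairwise_margin_loss(score_winner, score_loser, margin=1.0):
    """Hinge loss: penalize when the more convincing argument
    does not outscore the less convincing one by at least `margin`."""
    return max(0.0, margin - (score_winner - score_loser))

# Toy scores (hypothetical stand-ins for a scoring head over BERT embeddings).
scores = {"arg_a": 2.3, "arg_b": 0.7, "arg_c": 1.5}

# Labeled preference pairs: (more convincing, less convincing).
pairs = [("arg_a", "arg_b"), ("arg_a", "arg_c"), ("arg_c", "arg_b")]

# Average pairwise loss over the labeled pairs.
loss = sum(pairwise_margin_loss(scores[w], scores[l]) for w, l in pairs) / len(pairs)

# At inference, arguments are ranked by predicted score, descending.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)           # ['arg_a', 'arg_c', 'arg_b']
print(round(loss, 3))    # 0.133
```

In training, gradients of this loss would push the winner's score above the loser's by the margin; the list-wise variants mentioned in the abstract instead define the loss over an entire ranked list rather than over pairs.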
Copyright (c) 2022 Charles-Olivier Favreau, Amal Zouaq, Sameer Bhatnagar
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.