Improving Multi-hop Logical Reasoning in Small LMs with LoRA Training
DOI: https://doi.org/10.32473/flairs.38.1.138643
Keywords: logical reasoning, multi-hop reasoning, Navset, LoRA, small language models
Abstract
Language models show increasing performance on reasoning tasks. However, logical reasoning in complex tasks remains a challenge, and this challenge is more apparent when resources are limited, such as when using smaller language models or small datasets for knowledge extraction. How can language models be used in such settings to generalize and solve complex logical reasoning tasks? In this work, we show that LoRA training of language models on small datasets can improve logical reasoning and transferability for fact extraction. In our tests, we extracted facts with chain-of-thought (CoT) prompting and used them as input to a rule set. We ran experiments on the StepGame, Navset, Comparison, and TriviaQA datasets and evaluated the results with precision, recall, and accuracy metrics, comparing against untrained language models. Our results show that LoRA training improves logical reasoning even on out-of-distribution samples.
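Since the abstract centers on LoRA training of small language models, the following is a minimal sketch of how such fine-tuning could look using the Hugging Face PEFT library. The base model (gpt2), LoRA hyperparameters, dataset path, and training settings are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of LoRA fine-tuning for a small causal LM (hypothetical setup;
# the paper does not specify its model, hyperparameters, or data format).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder small LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters; only adapter weights are trained.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# A small dataset of reasoning examples, one {"text": ...} record per line
# (assumed format; "train.jsonl" is a placeholder path).
dataset = load_dataset("json", data_files="train.jsonl")["train"]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the small adapter weights
```

Because only the adapter matrices are updated, this kind of training stays feasible with small datasets and modest hardware, which matches the low-resource setting the abstract targets.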
License
Copyright (c) 2025 Onur Bilgin, Abdullah As Sami, Suraj Kumar, John Licato

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.