Generating Distractors for Code Completion Problems: Can LLM Assist Instructors?
DOI: https://doi.org/10.32473/flairs.38.1.138995

Abstract
Code completion problems are an effective type of formative assessment, especially when used to practice newly learned concepts or topics. While there is a growing body of research in computing education on the use of large language models (LLMs) to support the development of learning content, the use of LLMs to produce high-quality code completion problems has not yet been explored. In this paper, we analyze the capability of LLMs to generate effective distractors (i.e., plausible but incorrect options) and explanations for code completion problems. We utilize common student misconceptions to improve the quality of the generated distractors. Our study suggests that LLMs are capable of generating reasonable distractors and explanations. At the same time, we identify the lack of a sufficiently granular taxonomy of common student misconceptions, which would be needed to align the generated distractors with common misconceptions and errors; this gap should be addressed in future work.
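The abstract describes the approach only at a high level. As an illustration of the general idea, the sketch below shows one way a misconception-informed prompt for distractor generation could be composed. The misconception list, prompt wording, and the helper name build_distractor_prompt are hypothetical examples, not the prompts or taxonomy used in the paper.

```python
# Hypothetical sketch: composing a misconception-informed prompt for
# LLM-based distractor generation. Everything below (misconception list,
# prompt wording, example problem) is illustrative, not from the paper.

# A small sample of common novice misconceptions about Python loops.
# The paper notes that a sufficiently granular taxonomy of such
# misconceptions does not yet exist.
MISCONCEPTIONS = [
    "believes range(n) includes n",
    "confuses assignment (=) with equality comparison (==)",
    "thinks the loop variable keeps its value from a previous iteration",
]

def build_distractor_prompt(problem: str, correct_answer: str,
                            misconceptions: list[str]) -> str:
    """Compose an LLM prompt asking for plausible-but-incorrect options,
    each tied to a listed student misconception."""
    bullet_list = "\n".join(f"- {m}" for m in misconceptions)
    return (
        "You are helping an instructor write a code completion problem.\n"
        f"Problem (one line is blank):\n{problem}\n"
        f"Correct completion: {correct_answer}\n"
        "Known student misconceptions:\n"
        f"{bullet_list}\n"
        "Generate three distractors (plausible but incorrect completions). "
        "For each, name the misconception it targets and give a one-sentence "
        "explanation of why it is wrong."
    )

if __name__ == "__main__":
    problem = "total = 0\nfor i in ____:\n    total += i"
    print(build_distractor_prompt(problem, "range(1, n + 1)", MISCONCEPTIONS))
```

Tying each distractor to a named misconception, as sketched here, is what would allow generated options to be checked against a taxonomy of common student errors, which is the gap the paper identifies.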
License
Copyright (c) 2025 Mohammad Hassany, Kamil Akhuseyinoglu, Arun Balajiee Lekshmi Narayanan, Arav Agarwal, Jaromir Savelka, Peter Brusilovsky

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.