Automated Assessment of Student Self-explanation During Source Code Comprehension

Authors

  • Jeevan Chapagain, University of Memphis
  • Lasang Tamang
  • Rabin Banjade
  • Priti Oli
  • Vasile Rus

DOI:

https://doi.org/10.32473/flairs.v35i.130540

Keywords:

self-explanation, source code comprehension, semantic similarity

Abstract

This paper presents a novel method to automatically assess self-explanations generated by students during code comprehension activities. The self-explanations are produced in an online learning environment that asks students to freely explain Java code examples line by line. We explored a number of models that combine textual features with machine learning algorithms such as Support Vector Regression (SVR), Decision Trees (DT), and Random Forests (RF). SVR performed best, achieving a correlation of 0.7088 with human judgments. The best model used a combination of features, including semantic similarity measures obtained from a pre-trained Sentence-BERT model and from semantic algorithms previously developed for a state-of-the-art intelligent tutoring system.
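
A rough sketch of this kind of pipeline, for illustration only (the Sentence-BERT model name, the example data, and the single-feature setup below are assumptions, not the authors' actual configuration): compute a Sentence-BERT similarity between a student's line-level self-explanation and a reference expert explanation, then fit an SVR on such features against human ratings.

    import numpy as np
    from sentence_transformers import SentenceTransformer, util
    from sklearn.svm import SVR

    # Assumed pre-trained Sentence-BERT model; the abstract does not name a specific one.
    sbert = SentenceTransformer("all-MiniLM-L6-v2")

    def sbert_similarity(student_expl: str, expert_expl: str) -> float:
        """Cosine similarity between SBERT embeddings of the two explanations."""
        emb = sbert.encode([student_expl, expert_expl], convert_to_tensor=True)
        return float(util.cos_sim(emb[0], emb[1]))

    # Hypothetical annotated data: (student explanation, expert explanation, human score).
    pairs = [
        ("the loop adds each number to sum", "this loop accumulates the total of the array", 4.0),
        ("declares a variable", "declares the counter i and initializes it to zero", 2.5),
        ("prints the result", "prints the final sum to standard output", 3.5),
    ]

    X = np.array([[sbert_similarity(s, e)] for s, e, _ in pairs])
    y = np.array([score for _, _, score in pairs])

    # Train an SVR to predict human ratings from the similarity feature; models of this
    # kind are evaluated in the paper by their correlation with human judgments.
    svr = SVR(kernel="rbf").fit(X, y)
    print(svr.predict(X))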

Published

04-05-2022

How to Cite

Chapagain, J., Tamang, L., Banjade, R., Oli, P., & Rus, V. (2022). Automated Assessment of Student Self-explanation During Source Code Comprehension. The International FLAIRS Conference Proceedings, 35. https://doi.org/10.32473/flairs.v35i.130540

Issue

Section

Main Track Proceedings