Training language models for programming feedback using automated repair tools

C Koutcheme - International Conference on Artificial Intelligence in Education, 2023 - Springer
Abstract
In introductory programming courses, automated repair tools (ARTs) are used to provide feedback to students struggling with debugging. Most successful ARTs take advantage of context-specific educational data to construct repairs to students' buggy code. Recent work in student program repair using large language models (LLMs) has also started to utilize such data. An underexplored area in this field is the use of ARTs in combination with LLMs. In this paper, we propose to transfer the repairing capabilities of existing ARTs to open large language models by finetuning LLMs on ART corrections to buggy code. We experiment with this approach using three large datasets of Python programs written by novices. Our results suggest that a finetuned LLM provides more reliable and higher-quality repairs than the repair tool used for finetuning the model. This opens avenues for further deploying and using educational LLM-based repair techniques.
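The abstract describes the core idea only at a high level: use an existing ART to repair buggy student submissions, then finetune an open LLM on the resulting (buggy, repaired) pairs. A minimal sketch of that data-construction step is below; the paper does not give its pipeline, so `run_art` is a hypothetical stand-in for a real repair tool, and the prompt/completion format is an assumption, not the authors' actual setup.

```python
# Hypothetical sketch of building a supervised-finetuning dataset from
# ART outputs. A real pipeline would call an actual repair tool and
# filter out submissions the tool fails to repair.

def run_art(buggy_code: str) -> str:
    """Stand-in for an automated repair tool: here it only fixes a
    known off-by-one bug, purely for illustration."""
    return buggy_code.replace("range(len(xs) - 1)", "range(len(xs))")


def make_finetuning_example(buggy_code: str) -> dict:
    """Pair a buggy student program with the ART's repair, formatted as
    a prompt/completion record for supervised finetuning of an LLM."""
    repaired = run_art(buggy_code)
    return {
        "prompt": "# Fix the following buggy program:\n"
                  + buggy_code
                  + "# Repaired version:\n",
        "completion": repaired,
    }


buggy = (
    "def total(xs):\n"
    "    s = 0\n"
    "    for i in range(len(xs) - 1):\n"  # bug: skips the last element
    "        s += xs[i]\n"
    "    return s\n"
)
example = make_finetuning_example(buggy)
```

Each record can then be fed to any standard causal-LM finetuning loop; the design choice of interest in the paper is that the supervision signal comes from an existing ART rather than from human-written reference repairs.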