Explaining Competitive-Level Programming Solutions using LLMs
Jierui Li, Szymon Tworkowski, Yingying Wu, Raymond Mooney
1st Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2023) Long Paper
TLDR:
In this paper, we approach competitive-level programming problem-solving as a composite task of reasoning and code generation. We propose a novel method to automatically annotate natural language explanations to <problem, solution> pairs. We show that despite poor performance in solving competitive-level programming problems, state-of-the-art LLMs exhibit a strong capacity in describing and explaining their solutions.
Abstract:
In this paper, we approach competitive-level programming problem-solving as a composite task of reasoning and code generation. We propose a novel method to automatically annotate natural language explanations to <problem, solution> pairs. We show that despite poor performance in solving competitive-level programming problems, state-of-the-art LLMs exhibit a strong capacity for describing and explaining their solutions. Our explanation generation methodology produces a structured explanation of a problem's solution, containing both a description and an analysis. To evaluate the quality of the annotated explanations, we examine their effectiveness in two aspects: 1) satisfying the human programming expert who authored the oracle solution, and 2) aiding LLMs in solving problems more effectively. The experimental results on the CodeContests dataset demonstrate that while GPT-3.5 and GPT-4 are comparable in their ability to describe solutions, GPT-4 shows a better understanding of the key idea behind the solution.
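The abstract does not spell out how the annotation step is implemented. As a rough illustration only, a pipeline of this kind might prompt an LLM to describe and analyze an oracle solution for each <problem, solution> pair. The sketch below assumes an OpenAI-style chat API; the prompt wording, model name, and the annotate_explanation helper are illustrative assumptions, not the authors' actual method.

# Hypothetical sketch: attaching an LLM-generated explanation to a
# <problem, solution> pair. Prompt text and model choice are assumptions
# made for illustration; the paper's actual prompts are not shown here.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def annotate_explanation(problem: str, solution: str, model: str = "gpt-4") -> str:
    """Ask an LLM to produce a structured explanation of an oracle solution."""
    prompt = (
        "You are given a competitive programming problem and a correct "
        "solution. Write a structured explanation with two parts:\n"
        "1) Description: what the solution does, step by step.\n"
        "2) Analysis: the key idea behind the solution and why it works.\n\n"
        f"Problem:\n{problem}\n\nSolution:\n{solution}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for annotation
    )
    return response.choices[0].message.content

# Usage: explanation = annotate_explanation(problem_text, solution_code)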