Teaching Large Language Models to Self-Debug

Xinyun Chen, Maxwell Lin, Nathanael Schaerli, Denny Zhou

1st Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2023) Long Paper

TLDR: Large language models (LLMs) have achieved impressive performance on code generation. However, for complex programming tasks, generating the correct solution in one go becomes challenging. In this work, we propose self-debugging, which teaches a large language model to debug its predicted program via few-shot demonstrations.
Abstract: Large language models (LLMs) have achieved impressive performance on code generation. However, for complex programming tasks, generating the correct solution in one go becomes challenging. In this work, we propose self-debugging, which teaches a large language model to debug its predicted program via few-shot demonstrations. In particular, we demonstrate that self-debugging can teach the large language model to perform rubber duck debugging; i.e., without any feedback on the code correctness or error messages, the model is able to identify its mistakes by explaining the generated code in natural language. Self-debugging achieves state-of-the-art performance on several code generation benchmarks, including the Spider dataset for text-to-SQL generation, TransCoder for C++-to-Python translation, and MBPP for text-to-Python generation. On the Spider benchmark, where there are no unit tests to verify the correctness of predictions, self-debugging with code explanation consistently improves the baseline by 2-3%, and improves the prediction accuracy on problems of the hardest label by 9%. On TransCoder and MBPP, where unit tests are available, self-debugging can improve the baseline accuracy by 12%. Meanwhile, by leveraging feedback messages and reusing failed predictions, self-debugging notably improves sample efficiency, and can match or outperform baseline models that generate more than 10x candidate programs.
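
To make the described loop concrete, below is a minimal Python sketch of a self-debugging cycle of the kind the abstract outlines: generate a program, ask the model to explain its own code (the rubber duck step), run unit tests when they are available, and feed the explanation plus execution feedback back for a revised attempt. The llm callable, prompt wording, run_unit_tests harness, and max_turns budget are illustrative assumptions, not the authors' exact prompts or implementation.

# Sketch of a self-debugging loop; `llm(prompt) -> str` is a hypothetical
# few-shot-prompted completion function supplied by the caller.
from typing import Callable, List, Tuple

def run_unit_tests(program: str, tests: List[str]) -> Tuple[bool, str]:
    """Execute the candidate program against assert-style unit tests.
    Returns (all_passed, feedback_message), where feedback is the first error."""
    namespace: dict = {}
    try:
        exec(program, namespace)       # define the candidate function(s)
        for test in tests:
            exec(test, namespace)      # each test is an assert statement
        return True, "All tests passed."
    except Exception as exc:           # use the first failure as feedback
        return False, f"{type(exc).__name__}: {exc}"

def self_debug(
    llm: Callable[[str], str],         # hypothetical model call
    task: str,
    tests: List[str],
    max_turns: int = 3,
) -> str:
    # Initial prediction.
    program = llm(f"Write a Python function for this task:\n{task}")
    for _ in range(max_turns):
        passed, feedback = run_unit_tests(program, tests)
        if passed:
            break
        # Rubber duck step: the model explains its own code, then revises it
        # using the explanation together with the execution feedback.
        explanation = llm(f"Explain this code line by line:\n{program}")
        program = llm(
            "The code below may be incorrect.\n"
            f"Task: {task}\nCode:\n{program}\n"
            f"Explanation:\n{explanation}\nFeedback: {feedback}\n"
            "Return a corrected version of the code."
        )
    return program

In the no-unit-test setting (e.g., text-to-SQL on Spider), the run_unit_tests call would be dropped and the revision would rely on the code explanation alone, consistent with the rubber duck debugging described above.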