The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code

Xiao Liu, Da Yin, Chen Zhang, Yansong Feng, Dongyan Zhao

1st Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2023) Long Paper

TLDR: Code-LLMs prompted with code outperform text-only LLMs on abductive and counterfactual causal reasoning, and intervention experiments show that the programming structure of the prompt is the key factor.
Abstract: Causal reasoning, the ability to identify cause-and-effect relationships, is crucial to human thinking. Although large language models (LLMs) succeed in many NLP tasks, complex causal reasoning such as abductive and counterfactual reasoning remains challenging for them. Given that programming code tends to express causal relations more often and more explicitly, through conditional statements like ``if``, we explore whether Code-LLMs acquire better causal reasoning abilities. Our experiments show that, compared to text-only LLMs, Code-LLMs with code prompts are better causal reasoners. We further intervene on the prompts from different aspects and find that the key factor is the programming structure.
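
For illustration, here is a minimal sketch of how a counterfactual reasoning instance might be rendered as a code prompt with an explicit ``if`` structure. The helper name `build_code_prompt` and the template itself are hypothetical and not taken from the paper:

```python
def build_code_prompt(premise: str, counterfactual: str) -> str:
    """Wrap a counterfactual instance in a Python-style prompt whose
    ``if`` statement makes the causal condition explicit (hypothetical
    template; the paper's exact format may differ)."""
    return (
        f'premise = "{premise}"\n'
        f'counterfactual = "{counterfactual}"\n'
        "if counterfactual:\n"
        "    # the Code-LLM is asked to complete the new outcome\n"
        '    outcome = "'
    )

print(build_code_prompt(
    "Tom dropped the glass onto the floor.",
    "Tom caught the glass before it landed.",
))
```

The Code-LLM then completes the `outcome` string under the counterfactual condition; the abstract's finding is that this program-shaped framing, rather than surface wording, is what drives the improvement.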