The Larger they are, the Harder they Fail: Language Models do not Recognize Identifier Swaps in Python
Antonio Valerio Miceli Barone, Fazl Barez, Shay B. Cohen, Ioannis Konstas
Findings: Large Language Models (Findings Paper)
Session 4: Large Language Models (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 11, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 11, Session 4 (15:00-16:30 UTC)
Spotlight Session: Metropolitan Centre
Conference Room: Metropolitan Centre
Conference Time: July 10, 19:00-21:00 (EDT) (America/Toronto)
Global Time: Spotlight Session (July 10, 23:00 - July 11, 01:00 UTC)
Keywords:
scaling
Abstract:
Large Language Models (LLMs) have successfully been applied to code generation tasks, raising the question of how well these models understand programming.
Typical programming languages have invariances and equivariances in their semantics that human programmers intuitively understand and exploit, such as the (near) invariance to the renaming of identifiers. We show that LLMs not only fail to generate correct Python code when default function names are swapped, but some of them even become more confident in their incorrect predictions as the model size increases. This is an instance of the recently discovered phenomenon of Inverse Scaling, which runs contrary to the commonly observed trend of increasing prediction quality with increasing model size. Our findings indicate that, despite their astonishing typical-case performance, LLMs still lack a deep, abstract understanding of the content they manipulate, making them unsuitable for tasks that statistically deviate from their training data, and that mere scaling is not enough to achieve such capability.
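To make the identifier-swap setting concrete, the following minimal Python sketch (an illustrative example constructed here, not drawn from the paper's evaluation prompts) swaps two builtin function names; code written after the swap is only correct if it uses the names with their exchanged meanings:

```python
# Hypothetical illustration of an identifier swap between two Python builtins.
# After this assignment, the global name `len` refers to the builtin print
# function, and `print` refers to the builtin len function.
len, print = print, len


def count_items(items):
    # Correct under the swap: `print` now computes the length of its argument.
    return print(items)


# Correct under the swap: `len` now writes its argument to stdout.
len(count_items(["a", "b", "c"]))  # prints: 3
```

A human programmer adapts immediately to the swapped names; the paper reports that LLMs instead tend to complete such prompts as if no swap had occurred, and that for some models these incorrect completions become more confident as model size grows.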