Replicate and Compare with Humans: LLMs Represent Partial Semantic Knowledge in Pronoun Interpretation
Suet-ying Lam, Qingcheng Zeng, Kexun Zhang, Chenyu You, Rob Voigt
4th Workshop on Computational Approaches to Discourse, Regular Long Paper
TLDR:
While a large body of literature suggests that large language models (LLMs) acquire rich linguistic representations, little is known about whether they adapt to linguistic biases in a human-like way. The present study probes this question by comparing InstructGPT's performance on learning referential biases with results from real psycholinguistic experiments.
Abstract:
While a large body of literature suggests that large language models (LLMs) acquire rich linguistic representations, little is known about whether they adapt to linguistic biases in a human-like way. The present study probes this question by comparing InstructGPT's performance on learning referential biases with results from real psycholinguistic experiments. Recent psycholinguistic studies suggest that humans adapt their referential biases with exposure to referential patterns; closely replicating three relevant psycholinguistic experiments from Johnson and Arnold (2022) in an in-context learning (ICL) framework, we found that InstructGPT adapts its pronominal interpretations in response to the frequency of referential patterns in the local discourse, though in a limited fashion: adaptation was observed only for syntactic, not semantic, biases. Our results provide further evidence that contemporary LLMs' discourse representations are sensitive to syntactic patterns in the local context but less so to semantic patterns.
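
To make the ICL probing setup concrete, the sketch below shows how such a replication might be framed: exposure items that repeat one referential pattern are concatenated with an ambiguous test item, and the model's completion is scored against the candidate referents. This is a minimal illustration, not the authors' released code; the example sentences, names, and the query_model() helper are assumptions, and the actual API call would depend on the client library and model version used.

    # Minimal sketch of an in-context learning probe for pronoun interpretation.
    # Exposure items repeat a referential pattern (here, pronoun = subject);
    # the test item leaves the pronoun ambiguous between the two referents.
    EXPOSURE = [
        "Ana went to the park with Liz. She brought a picnic basket. -> She = Ana",
        "Ana played tennis with Liz. She won the first set. -> She = Ana",
        "Ana drove to the store with Liz. She forgot her wallet. -> She = Ana",
    ]
    TEST = "Ana had lunch with Liz. She ordered a salad. -> She ="

    def build_prompt(exposure, test):
        """Concatenate exposure items and the ambiguous test item into one prompt."""
        return "\n".join(exposure + [test])

    def query_model(prompt):
        """Placeholder for an InstructGPT completion request; swap in a real client."""
        raise NotImplementedError("e.g., a low-temperature completion call")

    if __name__ == "__main__":
        prompt = build_prompt(EXPOSURE, TEST)
        print(prompt)
        # answer = query_model(prompt)
        # Comparing how often the model resolves the pronoun to the subject vs.
        # the non-subject across exposure conditions is what an adaptation
        # analysis of this kind would measure.

Varying whether the exposure items instantiate a syntactic pattern (e.g., subject preference) or a semantic one, and then measuring the shift in the model's interpretations, is the general logic behind the comparison with the human results reported in the abstract.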