Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge
Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei Li, Yanghua Xiao
Main: Large Language Models Main-poster Paper
Poster Session 2: Large Language Models (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 10, 14:00-15:30 (EDT) (America/Toronto)
Global Time: July 10, Poster Session 2 (18:00-19:30 UTC)
Keywords:
interpretability/analysis
TLDR:
LLMs can correctly answer yes/no questions about negative commonsense knowledge (e.g., "lions don't live in the ocean"), yet they frequently fail to generate sentences grounded in that knowledge, a phenomenon we call the belief conflict.
Abstract:
Large language models (LLMs) have been widely studied for their ability to store and utilize positive knowledge.
However, negative knowledge, such as "lions don't live in the ocean", is also ubiquitous in the world but rarely mentioned explicitly in text.
What do LLMs know about negative knowledge?
This work examines how well LLMs handle negative commonsense knowledge.
We design a constrained keywords-to-sentence generation task (CG) and a Boolean question-answering task (QA) to probe LLMs.
Our experiments reveal that LLMs frequently fail to generate valid sentences grounded in negative commonsense knowledge, yet they can correctly answer polar yes-or-no questions.
We term this phenomenon the belief conflict of LLMs.
Our further analysis shows that statistical shortcuts and negation reporting bias from language modeling pre-training cause this conflict.
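Example:
To make the two probe formats concrete, below is a minimal, illustrative sketch (not the authors' code or data) of how the same negative-commonsense fact could be posed to a model as a constrained keywords-to-sentence generation (CG) prompt and as a polar yes/no question (QA). It assumes the Hugging Face transformers text-generation pipeline; the model name, prompt wording, and example keyword triple are placeholders rather than the paper's actual setup.
```python
# Illustrative sketch of the CG and QA probe formats described in the abstract.
# Assumes the Hugging Face `transformers` text-generation pipeline; the model,
# prompts, and example triple are placeholders, not the paper's setup.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

# Example negative-commonsense fact: (lion, live in, ocean) is false.
keywords = ["lion", "live in", "ocean"]

# CG probe: generate one factually correct sentence using all keywords.
cg_prompt = (
    "Write a short, factually correct sentence containing the keywords: "
    + ", ".join(keywords) + ".\nSentence:"
)

# QA probe: ask the same fact as a polar yes/no question.
qa_prompt = "Do lions live in the ocean? Answer yes or no.\nAnswer:"

for name, prompt in [("CG", cg_prompt), ("QA", qa_prompt)]:
    out = generator(prompt, max_new_tokens=20, do_sample=False)[0]["generated_text"]
    completion = out[len(prompt):].strip()
    print(f"{name} probe -> {completion}")

# A belief conflict appears when the QA probe answers "no" (correct) while the
# CG probe produces a sentence such as "Lions live in the ocean."
```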