[SRW] Can LMs Store and Retrieve 1-to-N Relational Knowledge?
Haruki Nagasawa, Benjamin Heinzerling, Kazuma Kokuta, Kentaro Inui
Student Research Workshop (SRW) Paper
Session 6: Student Research Workshop (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 12, 09:00-10:30 (EDT) (America/Toronto)
Global Time: July 12, Session 6 (13:00-14:30 UTC)
Abstract:
It has been suggested that pretrained language models can be viewed as knowledge bases.
One of the prerequisites for using language models as knowledge bases is how accurately they can store and retrieve world knowledge. It has already been shown that language models can store a large amount of 1-to-1 relational knowledge, such as "country and its capital," with high memorization accuracy.
On the other hand, world knowledge includes not only 1-to-1 but also 1-to-N relational knowledge, such as "parent and children."
However, it is not clear how accurately language models can handle 1-to-N relational knowledge.
To investigate language models' abilities with respect to 1-to-N relational knowledge, we first design the problem setting. Specifically, we characterize 1-to-N relational knowledge and define two essential skills: (i) memorizing multiple objects individually and (ii) retrieving all stored objects at once, without omissions or spurious additions. We then inspect LMs' ability to handle 1-to-N relational knowledge on controlled synthetic data.
As a result, we report that LMs can memorize multiple objects with high accuracy, but that generalizing the retrieval ability (specifically, enumeration) is challenging.
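To make the two skills concrete, here is a minimal, purely illustrative sketch (not the authors' code or data): a toy 1-to-N knowledge base mapping each subject to a set of objects, a memorization check (each stored object can be retrieved individually), and an enumeration check (the full set is retrieved with no omissions or spurious additions). The query_model callable and the example facts are hypothetical placeholders.

```python
# Illustrative sketch of the two skills, under assumed (not paper-provided) names.
from typing import Callable, Dict, Set

# 1-to-N relational knowledge: one subject maps to a set of objects.
KB: Dict[str, Set[str]] = {
    "Alice": {"Bob", "Carol"},            # e.g., parent -> children
    "Dave": {"Eve", "Frank", "Grace"},
}

def memorization_accuracy(query_model: Callable[[str], Set[str]],
                          kb: Dict[str, Set[str]]) -> float:
    """Skill (i): a (subject, object) pair counts as memorized if the object
    appears anywhere in the model's output for that subject."""
    hits, total = 0, 0
    for subject, objects in kb.items():
        predicted = query_model(subject)
        for obj in objects:
            hits += obj in predicted
            total += 1
    return hits / total

def enumeration_accuracy(query_model: Callable[[str], Set[str]],
                         kb: Dict[str, Set[str]]) -> float:
    """Skill (ii): all objects are retrieved at once; credit only for an
    exact set match (no omissions, no spurious additions)."""
    correct = sum(query_model(subject) == objects
                  for subject, objects in kb.items())
    return correct / len(kb)

# Usage with a stand-in "model" that forgets one of Dave's children:
fake_model = lambda s: {"Alice": {"Bob", "Carol"},
                        "Dave": {"Eve", "Frank"}}.get(s, set())
print(memorization_accuracy(fake_model, KB))  # 4 of 5 pairs recalled -> 0.8
print(enumeration_accuracy(fake_model, KB))   # only Alice's set is exact -> 0.5
```

The sketch mirrors the gap the abstract reports: a model can score well on per-object memorization while still failing the stricter exact-set enumeration criterion.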