[SRW] Geometric Locality of Entity Embeddings in Masked Language Models
Masaki Sakata, Sho Yokoi, Benjamin Heinzerling, Kentaro Inui
Student Research Workshop (SRW) Paper
Session 3: Student Research Workshop (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 11, 09:00-10:30 (EDT) (America/Toronto)
Global Time: July 11, Session 3 (13:00-14:30 UTC)
TLDR:
This paper addresses whether masked language models can distinguish (named) entities in their internal representations, investigating whether the model's internal representations of the same entity form a sufficiently compact cluster that is distinct from others.
Abstract:
This paper addresses whether masked language models can distinguish (named) entities in their internal representations, investigating whether the model's internal representations of the same entity form a sufficiently compact cluster that is distinct from others.
The primary contributions of the paper are two-fold.
First, we present a novel method for quantitatively assessing the degree to which pre-trained masked language models distinguish between entities in their internal representations.
Our approach investigates whether embeddings belonging to the same entity are separated from other embeddings in the embedding space.
Second, we conducted experiments whose findings revealed that the internal representations of masked language models can distinguish entities from other concepts to a certain degree, even when the surrounding context and mentions vary (e.g., BERT could distinguish about 70% of the entities from other concepts).
An implication of these results is that the distinctness of entities in the internal representations may serve as evidence that the model effectively processes entity strings.
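The abstract describes a method that checks whether a masked language model's embeddings of the same entity form a compact cluster separated from other embeddings. The following is a minimal sketch of one way such separability could be measured, assuming mention embeddings have already been extracted from the model and grouped by entity; the nearest-centroid criterion, the cosine similarity, and the entity names are illustrative assumptions, not the paper's actual metric.

```python
# Minimal sketch (not the paper's exact metric): given contextual mention
# embeddings grouped by entity, estimate how often an entity's embeddings
# lie closer to their own entity's centroid than to any other entity's.
import numpy as np

def centroid_separability(entity_embeddings: dict[str, np.ndarray]) -> dict[str, float]:
    """For each entity, return the fraction of its mention embeddings whose
    nearest centroid (by cosine similarity) is the entity's own centroid."""
    # Compute one unit-norm centroid per entity.
    centroids = {}
    for name, vecs in entity_embeddings.items():
        unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
        c = unit.mean(axis=0)
        centroids[name] = c / np.linalg.norm(c)

    names = list(centroids)
    centroid_matrix = np.stack([centroids[n] for n in names])  # (num_entities, dim)

    scores = {}
    for name, vecs in entity_embeddings.items():
        unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
        sims = unit @ centroid_matrix.T                    # cosine similarity to every centroid
        nearest = np.array(names)[sims.argmax(axis=1)]     # best-matching centroid per mention
        scores[name] = float((nearest == name).mean())
    return scores

# Toy usage: random vectors stand in for masked-LM mention embeddings.
rng = np.random.default_rng(0)
embeddings = {
    "Toronto": rng.normal(loc=0.0, size=(20, 768)),
    "Kyoto": rng.normal(loc=3.0, size=(20, 768)),
}
print(centroid_separability(embeddings))
```

A score near 1.0 for an entity would indicate that its mention embeddings form a cluster distinct from the other entities' centroids, which is the kind of geometric locality the paper examines.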