Unsupervised Semantic Variation Prediction using the Distribution of Sibling Embeddings
Taichi Aida, Danushka Bollegala
Findings Paper: Linguistic Theories, Cognitive Modeling, and Psycholinguistics
Session 7: Linguistic Theories, Cognitive Modeling, and Psycholinguistics (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 12, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 12, Session 7 (15:00-16:30 UTC)
Spotlight Session: Spotlight - Metropolitan West
Conference Room: Metropolitan West
Conference Time: July 10, 19:00-21:00 (EDT) (America/Toronto)
Global Time: July 10, Spotlight Session (23:00-01:00 UTC)
Keywords:
linguistic theories
Abstract:
Languages are dynamic entities, where the meanings associated with words constantly change with time.
Detecting the semantic variation of words is an important task for various NLP applications that must make time-sensitive predictions.
Existing work on semantic variation prediction has predominantly focused on comparing some form of an averaged contextualised representation of a target word computed from a given corpus.
However, some of the previously associated meanings of a target word can become obsolete over time (e.g., the meaning of gay as happy), while novel usages of existing words emerge (e.g., the meaning of cell as a mobile phone).
We argue that mean representations alone cannot accurately capture such semantic variations and propose a method that uses the entire cohort of the contextualised embeddings of the target word, which we refer to as the sibling distribution.
Experimental results on the SemEval-2020 Task 1 benchmark dataset for semantic variation prediction show that our method outperforms prior work that considers only the mean embeddings, and is comparable to the current state of the art.
Moreover, a qualitative analysis shows that our method detects important semantic changes in words that are not captured by existing methods.
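
Illustrative sketch (not from the paper): the abstract does not specify how sibling distributions are compared, so the following Python sketch is only a hypothetical illustration of the general idea. It collects the cohort of contextualised embeddings of a target word from two corpora with a masked language model (bert-base-uncased via Hugging Face transformers, an assumed choice), then contrasts a mean-embedding baseline (cosine distance between averages) with a distribution-level score (2-Wasserstein distance between diagonal Gaussians fitted to each cohort). The helper names sibling_embeddings, mean_distance, and sibling_distance are invented for this sketch.

    import numpy as np
    import torch
    from transformers import AutoModel, AutoTokenizer

    # Assumed model choice; the paper may use a different encoder.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.eval()

    def sibling_embeddings(sentences, target):
        """Collect one contextualised vector per occurrence of `target`."""
        vectors = []
        for sentence in sentences:
            encoded = tokenizer(sentence, return_tensors="pt", truncation=True)
            with torch.no_grad():
                hidden = model(**encoded).last_hidden_state[0]  # (seq_len, dim)
            tokens = tokenizer.convert_ids_to_tokens(encoded["input_ids"][0].tolist())
            for i, token in enumerate(tokens):
                if token == target:  # simplification: single-subword targets only
                    vectors.append(hidden[i].numpy())
        return np.stack(vectors)

    def mean_distance(a, b):
        """Mean-embedding baseline: cosine distance between the two averages."""
        ma, mb = a.mean(axis=0), b.mean(axis=0)
        return 1.0 - float(ma @ mb) / (np.linalg.norm(ma) * np.linalg.norm(mb))

    def sibling_distance(a, b):
        """Distribution-level score: 2-Wasserstein distance between diagonal
        Gaussians fitted to each sibling cohort. Unlike the mean baseline,
        it also reacts to changes in spread that leave the mean unchanged."""
        ma, mb = a.mean(axis=0), b.mean(axis=0)
        sa, sb = a.std(axis=0), b.std(axis=0)
        return float(np.sqrt(np.sum((ma - mb) ** 2) + np.sum((sa - sb) ** 2)))

    # Toy corpora for "cell": prison/biology senses vs. the phone sense.
    corpus_old = ["Each prisoner was held in a separate cell.",
                  "Every living cell contains genetic material."]
    corpus_new = ["She answered a call on her cell.",
                  "He left his cell at home, so nobody could reach him."]

    old = sibling_embeddings(corpus_old, "cell")
    new = sibling_embeddings(corpus_new, "cell")
    print("mean-based score:        ", mean_distance(old, new))
    print("distribution-based score:", sibling_distance(old, new))

Under this toy setup, a sense that appears or disappears changes the spread of the sibling cohort as well as its mean, so a distribution-level score can register variation that an average alone would understate; consult the paper for the authors' actual formulation.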