Consistent Prototype Learning for Few-Shot Continual Relation Extraction
Xiudi Chen, Hui Wu, Xiaodong Shi
Main: Information Extraction, Main-Oral Paper
Session 7: Information Extraction (Oral)
Conference Room: Metropolitan Centre
Conference Time: July 12, 11:00-11:45 (EDT) (America/Toronto)
Global Time: July 12, Session 7 (15:00-15:45 UTC)
Keywords:
named entity recognition and relation extraction, open information extraction, zero/few-shot extraction
Abstract:
Few-shot continual relation extraction aims to continually train a model on incrementally arriving few-shot data, learning new relations while avoiding forgetting old ones. However, current memory-based methods are prone to overfitting the memory samples, resulting in insufficient activation of old relations and a limited ability to handle the confusion of similar classes. In this paper, we design a new N-way-K-shot Continual Relation Extraction (NK-CRE) task and propose a novel few-shot continual relation extraction method with Consistent Prototype Learning (ConPL) to address these issues. ConPL is composed of three modules: 1) a prototype-based classification module that provides primary relation predictions under few-shot continual learning; 2) a memory-enhanced module that selects vital samples and refined prototypical representations as a novel multi-information episodic memory; 3) a consistent learning module that reduces catastrophic forgetting by enforcing distribution consistency. To effectively mitigate catastrophic forgetting, ConPL ensures that the samples and prototypes in the episodic memory remain consistent in classification and distribution. Additionally, ConPL uses prompt learning to extract better representations and adopts a focal loss to alleviate the confusion of similar classes. Experimental results on two commonly used datasets show that our model consistently outperforms competitive baselines.
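Since the abstract combines prototype-based classification with a focal loss, the sketch below illustrates how these two pieces could fit together in PyTorch. It is a minimal sketch under assumed names (PrototypeClassifier, focal_loss), not the authors' implementation, and it omits the memory-enhanced and consistent learning modules.

import torch
import torch.nn.functional as F


def focal_loss(logits, labels, gamma=2.0):
    # Focal loss: down-weights well-classified examples so training focuses
    # on hard, easily confused relations (the role the abstract ascribes to it).
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    pt = probs.gather(1, labels.unsqueeze(1)).squeeze(1)       # prob. of true class
    log_pt = log_probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    return (-(1.0 - pt) ** gamma * log_pt).mean()


class PrototypeClassifier(torch.nn.Module):
    # Classifies a relation mention by its distance to per-relation prototypes.

    def __init__(self):
        super().__init__()
        self.prototypes = {}  # relation id -> prototype vector

    def update_prototype(self, rel_id, support_embeddings):
        # A prototype is taken here as the mean of the few-shot support embeddings.
        self.prototypes[rel_id] = support_embeddings.mean(dim=0)

    def forward(self, query_embeddings):
        # Negative Euclidean distance to each prototype serves as the logit.
        rel_ids = sorted(self.prototypes)
        protos = torch.stack([self.prototypes[r] for r in rel_ids])  # [C, H]
        logits = -torch.cdist(query_embeddings, protos)              # [B, C]
        return logits, rel_ids

A query would then be assigned to the relation whose prototype is closest; in a continual setting, prototypes for old relations are kept as new relations arrive and, per the abstract, ConPL additionally constrains memory samples and prototypes to stay consistent in classification and distribution.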