Improving Continual Relation Extraction by Distinguishing Analogous Semantics
Wenzheng Zhao, Yuanning Cui, Wei Hu
Main: Information Extraction (Oral Paper)
Session 7: Information Extraction (Oral)
Conference Room: Metropolitan Centre
Conference Time: July 12, 11:00-11:45 (EDT) (America/Toronto)
Global Time: July 12, Session 7 (15:00-15:45 UTC)
Keywords:
named entity recognition and relation extraction
TLDR:
Continual relation extraction (RE) aims to learn constantly emerging relations while avoiding forgetting the learned relations. Existing works store a small number of typical samples to re-train the model to alleviate forgetting, but repeatedly replaying these samples may cause overfitting.
Abstract:
Continual relation extraction (RE) aims to learn constantly emerging relations while avoiding forgetting the learned relations. Existing works store a small number of typical samples to re-train the model to alleviate forgetting. However, repeatedly replaying these samples may cause the overfitting problem. We conduct an empirical study on existing works and observe that their performance is severely affected by analogous relations. To address this issue, we propose a novel continual extraction model for analogous relations. Specifically, we design memory-insensitive relation prototypes and memory augmentation to overcome the overfitting problem. We also introduce integrated training and focal knowledge distillation to enhance performance on analogous relations. Experimental results show the superiority of our model and demonstrate its effectiveness in distinguishing analogous relations and overcoming overfitting.
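The abstract names focal knowledge distillation as the mechanism for emphasizing analogous relations. As a rough intuition only, the general idea of focal distillation can be sketched as a standard KD loss reweighted by a focal factor so that samples the student already handles well are down-weighted; the temperature, the gamma exponent, and the exact weighting scheme below are illustrative assumptions, not the paper's actual formulation.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def focal_kd_loss(student_logits, teacher_logits, temperature=2.0, gamma=2.0):
    """Focal-style knowledge distillation (illustrative sketch).

    Computes KL(teacher || student) on temperature-softened distributions,
    then scales it by (1 - p)^gamma, where p is the student's probability
    on the teacher's top class. Samples where the student already agrees
    with the teacher contribute little; hard (e.g. analogous) samples
    dominate the loss.
    """
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    kl = sum(ti * math.log(ti / si) for ti, si in zip(t, s))
    top = max(range(len(t)), key=lambda i: t[i])  # teacher's predicted class
    focal = (1.0 - s[top]) ** gamma               # down-weight easy samples
    return focal * kl
```

When student and teacher agree, the KL term (and thus the loss) is near zero; when the student confuses two analogous relations, both the KL term and the focal factor grow, concentrating the gradient on exactly those cases.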