Rehearsal-free Continual Language Learning via Efficient Parameter Isolation
Zhicheng Wang, Yufang Liu, Tao Ji, Xiaoling Wang, Yuanbin Wu, Congcong Jiang, Ye Chao, Zhencong Han, Ling Wang, Xu Shao, Wenqiu Zeng
Main Conference: Large Language Models (Poster Paper)
Session 4: Large Language Models (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 11, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 11, Session 4 (15:00-16:30 UTC)
Keywords:
continual learning
TLDR:
A rehearsal-free continual language learning method that allocates a small set of private parameters per task on top of a shared pre-trained model and uses a simple non-parametric rule to select the right parameters at test time, avoiding any caching of historical task data.
Abstract:
We study the problem of defying catastrophic forgetting when learning a series of language processing tasks. Compared with previous methods, we emphasize the importance of not caching data from previous tasks, which makes the problem more challenging. Our proposed method applies a parameter isolation strategy: for each task, it allocates a small set of private parameters and learns them together with a shared pre-trained model. To load the correct parameters at test time, we introduce a simple yet effective non-parametric method. Experiments on continual language learning benchmarks show that our method significantly outperforms all existing no-data-cache methods and is comparable to (or even better than) methods that use historical data.
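The abstract only describes the approach at a high level; the following is a minimal, illustrative sketch (not the authors' released implementation) of how such a parameter-isolation learner could be organized. It assumes a frozen shared pre-trained encoder, a small per-task private module (here a toy adapter plus classifier), and a nearest-mean feature-matching rule as the non-parametric task selector at test time. All class and method names (FrozenEncoder, PrivateModule, IsolatedContinualLearner, train_task, predict) and the specific matching rule are assumptions for illustration, not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn

class FrozenEncoder(nn.Module):
    """Stand-in for a shared pre-trained encoder (e.g., a BERT-like model); weights are frozen."""
    def __init__(self, dim=768):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, x):
        return self.proj(x)

class PrivateModule(nn.Module):
    """Small per-task private parameters learned on top of the shared encoder (hypothetical structure)."""
    def __init__(self, dim=768, num_labels=2):
        super().__init__()
        self.adapter = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
        self.classifier = nn.Linear(dim, num_labels)

    def forward(self, h):
        return self.classifier(h + self.adapter(h))

class IsolatedContinualLearner:
    def __init__(self, dim=768):
        self.encoder = FrozenEncoder(dim)
        self.private = {}     # task_id -> PrivateModule (isolated per-task parameters)
        self.task_means = {}  # task_id -> mean encoder feature, used for task inference

    def train_task(self, task_id, loader, num_labels, epochs=1, lr=1e-3):
        """Learn a fresh private module for this task; the shared encoder stays untouched."""
        module = PrivateModule(self.encoder.proj.in_features, num_labels)
        opt = torch.optim.Adam(module.parameters(), lr=lr)
        feats = []
        for _ in range(epochs):
            for x, y in loader:
                h = self.encoder(x)  # shared, frozen representation
                loss = nn.functional.cross_entropy(module(h), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
                feats.append(h.detach())
        self.private[task_id] = module
        # Store only a summary statistic, not the raw data, to stay rehearsal-free.
        self.task_means[task_id] = torch.cat(feats).mean(dim=0)

    @torch.no_grad()
    def predict(self, x):
        h = self.encoder(x)
        # Non-parametric task selection: pick the task whose stored mean
        # encoder feature is closest to the current input's feature.
        q = h.mean(dim=0)
        task_id = min(self.task_means, key=lambda t: torch.norm(q - self.task_means[t]))
        return task_id, self.private[task_id](h).argmax(dim=-1)
```

In this sketch only the per-task private parameters and one mean feature vector per task are retained, so no historical examples are cached, which matches the no-data-cache constraint emphasized in the abstract.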