Improving Gradient Trade-offs between Tasks in Multi-task Text Classification
Heyan Chai, Jinhao Cui, Ye Wang, Min Zhang, Binxing Fang, Qing Liao
Main: Machine Learning for NLP Main-poster Paper
Session 7: Machine Learning for NLP (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 12, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 12, Session 7 (15:00-16:30 UTC)
Keywords:
multi-task learning
TLDR:
GetMTL is a gradient trade-off approach that mitigates conflicts between tasks in multi-task text classification by achieving a specific trade-off among tasks near the main MTC objective, improving the performance of each task simultaneously.
Abstract:
Multi-task learning (MTL) has emerged as a promising approach for sharing inductive bias across multiple tasks to enable more efficient learning in text classification.
However, training all tasks simultaneously often degrades the performance of each task compared with learning them independently, since different tasks might conflict with each other.
Existing MTL methods alleviate this issue by leveraging heuristics or gradient-based algorithms to achieve an arbitrary Pareto-optimal trade-off among different tasks.
In this paper, we present a novel gradient trade-off approach, dubbed GetMTL, to mitigate the task conflict problem; it achieves a specific trade-off among different tasks near the main objective of multi-task text classification (MTC), so as to improve the performance of each task simultaneously.
The results of extensive experiments on two benchmark datasets back up our theoretical analysis and validate the superiority of our proposed GetMTL.
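The abstract does not spell out the GetMTL algorithm itself, so the snippet below is only a minimal sketch of the broader family of gradient trade-off methods it relates to: a PCGrad-style pairwise projection that removes conflicting gradient components before averaging. The function name and toy gradients are illustrative assumptions, not the authors' method.

```python
import numpy as np

def combine_task_gradients(grads):
    """Combine per-task gradients, projecting out pairwise conflicts.

    PCGrad-style illustration: when two task gradients point in
    conflicting directions (negative dot product), the conflicting
    component of one is removed before averaging. This is NOT the
    GetMTL algorithm from the paper, only a sketch of the general
    idea of gradient trade-offs in multi-task learning.
    """
    adjusted = [g.astype(float).copy() for g in grads]
    for i, g_i in enumerate(adjusted):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = float(g_i @ g_j)
            if dot < 0:  # tasks i and j conflict
                g_i -= dot / (np.linalg.norm(g_j) ** 2 + 1e-12) * g_j
    return np.mean(adjusted, axis=0)

# Two toy task gradients that partially conflict.
g_task_a = np.array([1.0, 0.5])
g_task_b = np.array([-0.8, 1.0])
print(combine_task_gradients([g_task_a, g_task_b]))
```

In this simplified view, the combined update avoids moving against either task's objective; GetMTL, by contrast, seeks a specific trade-off near the main MTC objective rather than an arbitrary Pareto-optimal point.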