SMoA: Sparse Mixture of Adapters to Mitigate Multiple Dataset Biases
Yanchen Liu, Jing Yan, Yan Chen, Jing Liu, Hua Wu
Paper at the Third Workshop on Trustworthy Natural Language Processing (TrustNLP)
Abstract:
Recent studies have shown that various biases exist in different NLP tasks, and over-reliance on these biases can result in poor generalization and low adversarial robustness in models. To address this issue, previous research has proposed several debiasing techniques that effectively mitigate specific biases, but are limited in their ability to address other biases. In this paper, we introduce a novel debiasing method, Sparse Mixture-of-Adapters (SMoA), which can effectively and efficiently mitigate multiple dataset biases. Our experiments on Natural Language Inference and Paraphrase Identification tasks demonstrate that SMoA outperforms both full-finetuning and adapter tuning baselines, as well as prior strong debiasing methods. Further analysis reveals that SMoA is interpretable, with each sub-adapter capable of capturing specific patterns from the training data and specializing in handling specific biases.
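The abstract describes SMoA at a high level: a router sparsely activates a small set of sub-adapters, each of which specializes in particular patterns or biases. The paper does not give implementation details here, so the following is only an illustrative sketch of the general sparse mixture-of-adapters idea (top-k routing over bottleneck adapters with a residual connection); all names, dimensions, and the routing scheme are assumptions, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN, BOTTLENECK, N_ADAPTERS, TOP_K = 8, 4, 4, 2  # illustrative sizes

# Each sub-adapter is a small bottleneck network: down-project, ReLU, up-project.
adapters = [
    (rng.standard_normal((HIDDEN, BOTTLENECK)) * 0.1,
     rng.standard_normal((BOTTLENECK, HIDDEN)) * 0.1)
    for _ in range(N_ADAPTERS)
]
# The router scores each sub-adapter for a given hidden state.
router_w = rng.standard_normal((HIDDEN, N_ADAPTERS)) * 0.1

def smoa_layer(h):
    """Sparse mixture of adapters: route h through only the top-k
    sub-adapters and add their weighted outputs to h (residual)."""
    scores = h @ router_w
    top_k = np.argsort(scores)[-TOP_K:]          # indices of the k best adapters
    weights = np.exp(scores[top_k])
    weights /= weights.sum()                     # softmax over selected adapters only
    out = h.copy()
    for w, idx in zip(weights, top_k):
        down, up = adapters[idx]
        out += w * (np.maximum(h @ down, 0.0) @ up)  # bottleneck adapter output
    return out

h = rng.standard_normal(HIDDEN)
y = smoa_layer(h)
print(y.shape)  # same shape as the input hidden state
```

Because only `TOP_K` of the `N_ADAPTERS` sub-adapters run per input, compute cost stays close to a single adapter while capacity scales with the number of sub-adapters; the paper's analysis suggests this specialization is what lets different sub-adapters capture different dataset biases.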