Model-tuning Via Prompts Makes NLP Models Adversarially Robust
Mrigank Raman, Pratyush Maini, Zico Kolter, Zachary C. Lipton, Danish Pruthi
The Third Workshop on Trustworthy Natural Language Processing
TLDR:
Adapting pretrained language models to downstream tasks by appending a prompt template (MVP), rather than a randomly initialized MLP head, yields surprising gains in adversarial robustness over standard fine-tuning.
Abstract:
In recent years, NLP practitioners have converged on the following practice: (i) import an off-the-shelf pretrained (masked) language model; (ii) append a multilayer perceptron atop the CLS token's hidden representation (with randomly initialized weights); and (iii) fine-tune the entire model on a downstream task (linear fine-tuning). This procedure has produced massive gains on standard NLP benchmarks, but these models remain brittle, even to mild adversarial perturbations such as word-level synonym substitutions. In this work, we demonstrate surprising gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP), an alternative method of adapting to downstream tasks. Rather than modifying the model (by appending an MLP head), MVP instead modifies the input (by appending a prompt template). Across three classification datasets, MVP improves performance against adversarial word-level synonym substitutions by an average of 8% over standard methods, and even outperforms state-of-the-art adversarial-training-based defenses by 3.5%. By combining MVP with adversarial training, we achieve further improvements in robust accuracy while maintaining clean accuracy. Finally, we conduct ablations to investigate the mechanism underlying these gains. Notably, we find that the vulnerability of standard fine-tuning can be attributed to the misalignment between pre-training and fine-tuning tasks, and to the randomly initialized MLP parameters.
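To make the contrast between the two adaptation styles concrete, here is a minimal sketch. It is not the authors' implementation; it assumes the Hugging Face transformers library, a roberta-base backbone, and a hypothetical binary sentiment task. The prompt template ("It was <mask>.") and the label words (terrible/great) are illustrative choices, not those from the paper.

```python
import torch
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
text = "A gripping, well-acted film."

# (a) Standard fine-tuning: append a randomly initialized classification
# head on top of the CLS representation; the head is learned from scratch.
clf = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits_ft = clf(**enc).logits  # scores from the new, randomly initialized head

# (b) MVP-style prompting: keep the pretrained masked-LM head and modify the
# input with a prompt template instead; class scores are the MLM logits of the
# label words at the mask position, so no new parameters are introduced.
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base")
prompt = f"{text} It was {tokenizer.mask_token}."
enc = tokenizer(prompt, return_tensors="pt")
mask_pos = (enc.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
label_words = [" terrible", " great"]  # hypothetical verbalizer for {neg, pos}
label_ids = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(w)[0]) for w in label_words]
with torch.no_grad():
    logits_mvp = mlm(**enc).logits[0, mask_pos, label_ids]  # shape: [2]
```

Because the prompting variant reuses the pretrained MLM head, its fine-tuning objective stays aligned with pre-training and introduces no randomly initialized parameters, the two sources of vulnerability the ablations above identify.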