MIL-Decoding: Detoxifying Language Models at Token-Level via Multiple Instance Learning
Xu Zhang, Xiaojun Wan
Main: Ethics and NLP Main-poster Paper
Session 4: Ethics and NLP (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 11, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 11, Session 4 (15:00-16:30 UTC)
Keywords:
model bias/unfairness mitigation
TLDR:
Despite advances in large pre-trained neural language models, they are prone to generating toxic language, which brings security risks to their applications.
We introduce MIL-Decoding, which detoxifies a language model at the token level by interpolating it with a trained multiple instance learning (MIL) network.
Abstract:
Despite advances in large pre-trained neural language models, they are prone to generating toxic language, which brings security risks to their applications.
We introduce MIL-Decoding, which detoxifies a language model at the token level by interpolating it with a trained multiple instance learning (MIL) network.
The MIL network is trained on a corpus with a toxicity label for each text, learning to predict both the overall toxicity of a text and the toxicity of each token in its context.
Intuitively, the MIL network computes a toxicity distribution over next tokens according to the generated context, which supplements the original language model and steers it away from toxic continuations.
We evaluate MIL-Decoding with automatic metrics and human evaluation; it outperforms other baselines in detoxification while only slightly hurting generation fluency.
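To make the decoding-time interpolation concrete, here is a minimal sketch in PyTorch. This is not the authors' implementation: the exponential down-weighting form, the mixing strength `alpha`, and the function `mil_decode_step` are illustrative assumptions; the paper's actual interpolation formula may differ.

```python
import torch
import torch.nn.functional as F

def mil_decode_step(lm_logits: torch.Tensor,
                    token_toxicity: torch.Tensor,
                    alpha: float = 2.0) -> torch.Tensor:
    """One decoding step: re-weight the LM's next-token distribution
    with per-token toxicity scores from a MIL-style network.

    lm_logits:      (vocab_size,) raw next-token logits from the language model.
    token_toxicity: (vocab_size,) toxicity scores in [0, 1] predicted for each
                    candidate next token given the generated context.
    alpha:          mixing strength (hypothetical hyperparameter).
    """
    lm_probs = F.softmax(lm_logits, dim=-1)
    # Down-weight tokens the toxicity model flags as toxic,
    # then renormalize so the result is a valid distribution.
    adjusted = lm_probs * torch.exp(-alpha * token_toxicity)
    return adjusted / adjusted.sum()

# Toy usage: a 5-token vocabulary where token 3 is predicted to be toxic.
logits = torch.tensor([1.0, 0.5, 0.2, 2.0, -1.0])
toxicity = torch.tensor([0.0, 0.1, 0.0, 0.9, 0.0])
probs = mil_decode_step(logits, toxicity)
print(probs)  # probability mass shifts away from token 3
```

In this sketch, setting `alpha = 0` recovers the original language model, while larger values suppress tokens the toxicity predictor scores highly, trading fluency for detoxification.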