Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting
Zahra Fatemi, Chen Xing, Wenhao Liu, Caiming Xiong
Main: Ethics and NLP Main-poster Paper
    Poster Session 4: Ethics and NLP (Poster)
    
Conference Room: Frontenac Ballroom and Queen's Quay 
    Conference Time: July 11, 11:00-12:30 (EDT) (America/Toronto)
    Global Time: July 11, Poster Session 4 (15:00-16:30 UTC)
    
    
  
          Keywords:
          model bias/unfairness mitigation
        
        
        
        
          TLDR:
          Existing studies addressing gender bias in pre-trained language models usually build a small gender-neutral dataset and conduct a second phase of pre-training on the model with such data. However, given the limited size and concentrated focus of the gender-neutral data, catastrophic forgetting would...
        
  
  
    
            Abstract:
            Existing studies addressing gender bias in pre-trained language models usually build a small gender-neutral dataset and conduct a second phase of pre-training on the model with such data. However, given the limited size and concentrated focus of the gender-neutral data, catastrophic forgetting would occur during second-phase pre-training. Forgetting information in the original training data may damage the model's downstream performance by a large margin. In this work, we empirically show that catastrophic forgetting occurs in such methods by evaluating them on general NLP tasks in GLUE. We then propose a new method, GEnder Equality Prompt (GEEP), to improve the gender fairness of pre-trained models with less forgetting. GEEP freezes the pre-trained model and learns gender-related prompts with gender-neutral data. Empirical results show that GEEP not only achieves state-of-the-art performance on gender fairness tasks, but also forgets less and performs better on GLUE by a large margin.
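            The abstract describes GEEP as freezing the pre-trained model and learning gender-related prompts on gender-neutral data. Below is a minimal prompt-tuning sketch of that idea, assuming a HuggingFace-style masked language model; the model name, prompt length, learning rate, and the forward_with_prompts helper are illustrative assumptions, not the authors' released implementation.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Illustrative settings; the paper's actual backbone and prompt length are
# not specified here, so treat these values as assumptions.
MODEL_NAME = "roberta-base"
NUM_PROMPTS = 20  # number of learnable gender-equality prompt vectors

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

# Freeze every pre-trained parameter so the original knowledge stays intact.
for param in model.parameters():
    param.requires_grad = False

# New trainable prompt embeddings, initialized near the input-embedding scale.
embed_dim = model.get_input_embeddings().embedding_dim
prompt_embeds = torch.nn.Parameter(torch.randn(NUM_PROMPTS, embed_dim) * 0.02)

def forward_with_prompts(input_ids, attention_mask, labels=None):
    """Prepend the learnable prompts to the token embeddings of a batch."""
    batch_size = input_ids.size(0)
    token_embeds = model.get_input_embeddings()(input_ids)
    prompts = prompt_embeds.unsqueeze(0).expand(batch_size, -1, -1)
    inputs_embeds = torch.cat([prompts, token_embeds], dim=1)

    prompt_mask = torch.ones(batch_size, NUM_PROMPTS, dtype=attention_mask.dtype)
    attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)

    if labels is not None:
        # Prompt positions carry no MLM targets (-100 is ignored by the loss).
        ignore = torch.full((batch_size, NUM_PROMPTS), -100, dtype=labels.dtype)
        labels = torch.cat([ignore, labels], dim=1)

    return model(inputs_embeds=inputs_embeds,
                 attention_mask=attention_mask,
                 labels=labels)

# Only the prompt embeddings receive gradient updates on the gender-neutral data.
optimizer = torch.optim.AdamW([prompt_embeds], lr=1e-3)
```

            In a training loop one would call forward_with_prompts on masked batches of the gender-neutral corpus and step the optimizer; because the backbone is frozen, the original pre-training knowledge is preserved, which is the mechanism the abstract credits for reduced forgetting on GLUE.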
          
         Anthology
       Underline