Prompt Discriminative Language Models for Domain Adaptation
Keming Lu, Peter Potash, Xihui Lin, Yuwen Sun, Zihan Qian, Zheng Yuan, Tristan Naumann, Tianxi Cai, Junwei Lu
The 5th Workshop on Clinical Natural Language Processing (ClinicalNLP)
TLDR:
BioDLM adapts discriminative language models to the biomedical domain through prompt-based continual pretraining and prompt tuning, reformulating downstream tasks as span-level corruption detection to improve data efficiency in low-resource settings.
Abstract:
Prompt tuning offers an efficient approach to domain adaptation for pretrained language models, but existing prompt-tuning methods predominantly focus on masked language modeling or generative objectives.
However, the potential of discriminative language models in biomedical tasks remains underexplored.
To bridge this gap, we develop BioDLM, a method tailored for biomedical domain adaptation of discriminative language models that incorporates prompt-based continual pretraining and prompt tuning for downstream tasks.
BioDLM aims to maximize the potential of discriminative language models in low-resource scenarios by reformulating downstream tasks as span-level corruption detection, thereby enhancing performance on domain-specific tasks and improving the efficiency of continual pretraining.
In this way, BioDLM provides a data-efficient domain adaptation method for discriminative language models, effectively enhancing performance on discriminative tasks within the biomedical domain.
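To give a concrete sense of how a downstream task can be cast as span-level corruption detection with a discriminative (ELECTRA-style) model, the sketch below scores candidate label words with a discriminator head and selects the one the model considers least "corrupted" in context. This is a minimal illustration only; the checkpoint, prompt template, and labels are assumptions, not the authors' BioDLM setup or code.

```python
# Minimal sketch: prompt-based classification as corruption detection with an
# ELECTRA-style discriminator. Lower "replaced-token" logits over the label
# span mean the label word is more plausible in the prompt.
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_name = "google/electra-small-discriminator"  # illustrative stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ElectraForPreTraining.from_pretrained(model_name)
model.eval()

def corruption_score(text: str, label: str,
                     template: str = "{text} This finding is {label}.") -> float:
    """Mean corruption logit over the label's token span (lower = more plausible)."""
    prompt = template.format(text=text, label=label)
    start = prompt.rindex(label)            # character span of the label word in the template
    end = start + len(label)
    enc = tokenizer(prompt, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]
    with torch.no_grad():
        logits = model(**enc).logits[0]      # per-token "replaced" logits
    label_positions = [i for i, (s, e) in enumerate(offsets.tolist())
                       if s >= start and e <= end and e > s]
    return logits[label_positions].mean().item()

def classify(text: str, labels: list[str]) -> str:
    # Choose the label word the discriminator flags least as corrupted.
    return min(labels, key=lambda lab: corruption_score(text, lab))

print(classify("Chest X-ray shows no acute cardiopulmonary abnormality.",
               ["normal", "abnormal"]))
```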