Using Bottleneck Adapters to Identify Cancer in Clinical Notes under Low-Resource Constraints

Omid Rohanian, Hannah Jauncey, Mohammadmahdi Nouriborji, Vinod Kumar, Bronner P. Gonçalves, Christiana Kartsonaki, ISARIC Clinical Characterisation Group, Laura Merson, David Clifton

BioNLP and BioNLP-ST 2023, Long Paper

TLDR: Processing information locked within clinical health records is a challenging task that remains an active area of research in biomedical NLP. In this work, we evaluate a broad set of machine learning techniques ranging from simple RNNs to specialised transformers such as BioBERT on a dataset containing clinical notes along with a set of annotations indicating whether a sample is cancer-related or not.
Abstract: Processing information locked within clinical health records is a challenging task that remains an active area of research in biomedical NLP. In this work, we evaluate a broad set of machine learning techniques ranging from simple RNNs to specialised transformers such as BioBERT on a dataset containing clinical notes along with a set of annotations indicating whether a sample is cancer-related or not. Furthermore, we specifically employ efficient fine-tuning methods from NLP, namely bottleneck adapters and prompt tuning, to adapt the models to our specialised task. Our evaluations suggest that fine-tuning a frozen BERT model pre-trained on natural language and augmented with bottleneck adapters outperforms all other strategies, including full fine-tuning of the specialised BioBERT model. Based on our findings, we suggest that using bottleneck adapters in low-resource situations with limited access to labelled data or processing capacity could be a viable strategy in biomedical text mining.
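For readers unfamiliar with the setup described in the abstract, the following is a minimal PyTorch/Hugging Face sketch of the general idea: a small bottleneck module (down-projection, nonlinearity, up-projection, residual connection) is inserted into each encoder layer of a frozen BERT, so that only the adapters and the classification head are trained. The checkpoint name, bottleneck size, and exact insertion point are illustrative assumptions, not the paper's reported configuration.

```python
# Sketch only: bottleneck adapters on a frozen BERT for binary classification.
# Assumed details (not from the paper): "bert-base-uncased", bottleneck size 64,
# adapter placed after the feed-forward block of each encoder layer.
import torch.nn as nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer

class BottleneckAdapter(nn.Module):
    """Down-project, apply a nonlinearity, up-project, add a residual connection."""
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

class AdapterBertOutput(nn.Module):
    """Wraps a Hugging Face BertOutput module and applies the adapter to the
    feed-forward output before the residual addition and LayerNorm."""
    def __init__(self, original, bottleneck_size: int = 64):
        super().__init__()
        self.original = original
        self.adapter = BottleneckAdapter(original.dense.out_features, bottleneck_size)

    def forward(self, hidden_states, input_tensor):
        h = self.original.dropout(self.original.dense(hidden_states))
        h = self.adapter(h)
        return self.original.LayerNorm(h + input_tensor)

model_name = "bert-base-uncased"  # assumed general-domain checkpoint
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Freeze the backbone, then insert one adapter into every encoder layer.
for p in model.bert.parameters():
    p.requires_grad = False
for layer in model.bert.encoder.layer:
    layer.output = AdapterBertOutput(layer.output)

# Only adapter and classification-head parameters remain trainable.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(f"{len(trainable)} trainable parameter tensors, e.g. {trainable[:3]}")
```

Because the frozen backbone contributes no gradients, the number of trainable parameters is a small fraction of the full model, which is what makes this strategy attractive under the low-resource constraints the paper targets.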