Lon-eå at SemEval-2023 Task 11: A Comparison of Activation Functions for Soft and Hard Label Prediction

Peyman Hosseini, Mehran Hosseini, Sana Al-azzawi, Marcus Liwicki, Ignacio Castro, Matthew Purver

The 17th International Workshop on Semantic Evaluation (SemEval-2023), Task 11: Learning with Disagreements (Le-Wi-Di)

TLDR: We study the influence of different activation functions in the output layer of pre-trained transformer models for soft and hard label prediction in the learning with disagreements task. In this task, the goal is to quantify the amount of disagreement via predicting soft labels. To predict the soft labels, we use BERT-based preprocessors and encoders and vary the activation function used in the output layer, while keeping other parameters constant.
Abstract: We study the influence of different activation functions in the output layer of pre-trained transformer models for soft and hard label prediction in the learning with disagreements task. In this task, the goal is to quantify the amount of disagreement via predicting soft labels. To predict the soft labels, we use BERT-based preprocessors and encoders and vary the activation function used in the output layer, while keeping other parameters constant. The soft labels are then used for the hard label prediction. The activation functions considered are sigmoid, a step function added to the model post-training, and a sinusoidal activation function, which is introduced for the first time in this paper.
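The abstract does not specify the exact form of the sinusoidal activation or the post-training step function, so the sketch below is only a minimal, hypothetical illustration in PyTorch of the general setup it describes: a classification head on top of a BERT-style encoder in which only the output-layer activation is varied to produce soft labels, with hard labels then derived from the soft labels by thresholding. The class and function names (SoftLabelHead, hard_label) and the particular sinusoidal form are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SoftLabelHead(nn.Module):
    """Output head whose activation function can be swapped while the rest
    of the model (encoder, linear layer) stays the same."""

    def __init__(self, hidden_size: int, activation: str = "sigmoid"):
        super().__init__()
        self.linear = nn.Linear(hidden_size, 1)
        self.activation = activation

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        z = self.linear(pooled)
        if self.activation == "sigmoid":
            # Standard choice: maps logits to a soft label in [0, 1].
            return torch.sigmoid(z)
        elif self.activation == "sinusoidal":
            # Assumed form of a sinusoidal activation, rescaled to [0, 1];
            # the paper's actual definition may differ.
            return 0.5 * (torch.sin(z) + 1.0)
        raise ValueError(f"unknown activation: {self.activation}")


def hard_label(soft: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Post-training step function: threshold the predicted soft label
    to obtain a hard (binary) label."""
    return (soft >= threshold).long()


if __name__ == "__main__":
    # Toy usage with a random "pooled" encoder output of hidden size 768.
    pooled = torch.randn(4, 768)
    head = SoftLabelHead(hidden_size=768, activation="sinusoidal")
    soft = head(pooled)
    print(soft.squeeze(-1), hard_label(soft).squeeze(-1))
```

In this reading, swapping `activation` changes only the final nonlinearity, which matches the abstract's statement that all other parameters are kept constant across the compared configurations.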