John-Arthur at SemEval-2023 Task 4: Fine-Tuning Large Language Models for Arguments Classification
Georgios Balikas
The 17th International Workshop on Semantic Evaluation (SemEval-2023), Task 4: ValueEval: Identification of Human Values Behind Arguments
Abstract:
This paper presents the system submissions of the John-Arthur team to the SemEval Task 4 ``ValueEval: Identification of Human Values behind Arguments''. The best system of the team was ranked 3rd and the overall rank of the team was 2nd (the first team had the two best systems). The John-Arthur team models the ValueEval problem as a multi-class, multi-label text classification problem. The solutions leverage recently proposed large language models that are fine-tuned on the provided datasets. To boost the achieved performance we employ different best practices whose impact on the model performance we evaluate here. The code is publicly available on GitHub and the model on the Hugging Face hub.
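To make the modeling choice in the abstract concrete, below is a minimal, hypothetical sketch of how ValueEval-style data can be framed as multi-label classification: each argument may express several human values at once, so labels become multi-hot vectors, and systems are compared with a per-label (macro-averaged) F1 score. The `VALUES` list here is an illustrative subset, not the task's full label set, and the helper names are our own, not from the paper.

```python
# Hypothetical sketch: framing ValueEval as multi-label classification.
# An argument can express several human values at once, so each example's
# labels are encoded as a multi-hot 0/1 vector rather than one class index.

# Illustrative subset of value categories (the real task defines more).
VALUES = ["Self-direction", "Achievement", "Security", "Universalism"]

def to_multi_hot(labels, values=VALUES):
    """Encode a set of value labels as a 0/1 vector over all categories."""
    return [1 if v in labels else 0 for v in values]

def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per label column, then average."""
    n_labels = len(y_true[0])
    f1s = []
    for j in range(n_labels):
        tp = sum(t[j] and p[j] for t, p in zip(y_true, y_pred))
        fp = sum((not t[j]) and p[j] for t, p in zip(y_true, y_pred))
        fn = sum(t[j] and (not p[j]) for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / n_labels

# One argument annotated with two values -> multi-hot target vector.
target = to_multi_hot({"Security", "Achievement"})
```

A fine-tuned language model would then emit one sigmoid score per value category and be trained with a per-label binary loss against such multi-hot targets; this sketch only illustrates the label encoding and the evaluation averaging, not the paper's actual training pipeline.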