IRIT_IRIS_A at SemEval-2023 Task 6: Legal Rhetorical Role Labeling Supported by Dynamic-Filled Contextualized Sentence Chunks
Alexandre Gomes de Lima, Jose G. Moreno, Eduardo H. da S. Aranha
The 17th International Workshop on Semantic Evaluation (SemEval-2023), Task 6: LegalEval - Understanding Legal Texts
TLDR:
This work boosts the performance of pre-trained Transformer models on the Legal Rhetorical Role Labeling task by feeding them sentence chunks assembled so that no padding tokens are inserted and no sentences are truncated, yielding better sentence embeddings and outperforming strong baselines by 3.76% to 8.71%.
Abstract:
This work presents and evaluates an approach that efficiently leverages the context exploitation ability of pre-trained Transformer models to boost the performance of models tackling the Legal Rhetorical Role Labeling task. The core idea is to feed the model with sentence chunks that are assembled so as to avoid the insertion of padding tokens and the truncation of sentences, and hence to obtain better sentence embeddings. The results show that, despite its simplicity, our proposal is effective: models based on it outperform strong baselines by 3.76% in the worst case and by 8.71% in the best case.
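The abstract describes packing consecutive sentences into chunks that fill the model's input window without padding or truncating any sentence. The Python sketch below illustrates that general packing idea only; it is not the authors' exact procedure. The function name `pack_sentences_into_chunks` is hypothetical, and the whitespace-based `count_tokens` is a placeholder for the pre-trained model's actual subword tokenizer.

```python
from typing import Callable, List


def pack_sentences_into_chunks(
    sentences: List[str],
    max_tokens: int = 512,
    count_tokens: Callable[[str], int] = lambda s: len(s.split()),
) -> List[List[str]]:
    """Greedily group consecutive sentences into chunks whose total token
    count stays within the model's input budget, so that no sentence is
    truncated and no padding is needed to fill a chunk."""
    chunks: List[List[str]] = []
    current: List[str] = []
    current_len = 0
    for sent in sentences:
        n = count_tokens(sent)
        # Start a new chunk if adding this sentence would exceed the budget.
        if current and current_len + n > max_tokens:
            chunks.append(current)
            current, current_len = [], 0
        current.append(sent)
        current_len += n
    if current:
        chunks.append(current)
    return chunks


if __name__ == "__main__":
    doc = [
        "The appellant filed a petition before the High Court.",
        "The court heard arguments from both parties.",
        "Judgment was reserved.",
    ]
    for chunk in pack_sentences_into_chunks(doc, max_tokens=12):
        print(chunk)
```

In this sketch, each chunk is later encoded in a single forward pass so that every sentence is embedded together with its neighbors; in a real setup the token budget and tokenizer would match the chosen pre-trained Transformer.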