Ertim at SemEval-2023 Task 2: Fine-tuning of Transformer Language Models and External Knowledge Leveraging for NER in Farsi, English, French and Chinese
Kevin Deturck, Pierre Magistry, Bénédicte Diot-Parvaz Ahmad, Ilaine Wang, Damien Nouvel, Hugo Lafayette
The 17th International Workshop on Semantic Evaluation (SemEval-2023), Task 2: MultiCoNER II Multilingual Complex Named Entity Recognition
TLDR:
Transformer language models are now a solid baseline for Named Entity Recognition and can be significantly improved by leveraging complementary resources, either by integrating external knowledge or by annotating additional data. In a preliminary step, this work presents experiments on fine-tuning transformer models.
Abstract:
Transformer language models are now a solid baseline for Named Entity Recognition and can be significantly improved by leveraging complementary resources, either by integrating external knowledge or by annotating additional data. In a preliminary step, this work presents experiments on fine-tuning transformer models. We then conducted a set of experiments with a Wikipedia-based reclassification system. Additionally, we ran a small annotation campaign on Farsi to evaluate the impact of additional data. Both methods, which draw on complementary resources, showed improvements over fine-tuning alone.
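As context for the fine-tuning baseline, the following is a minimal sketch of fine-tuning a transformer for token-level NER with the Hugging Face transformers library. The model name (xlm-roberta-base), the label set, and the hyperparameters are illustrative assumptions, not the paper's reported configuration.

from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Illustrative BIO tag set; the actual MultiCoNER II taxonomy is larger.
LABELS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(LABELS),
    id2label=dict(enumerate(LABELS)),
    label2id={label: i for i, label in enumerate(LABELS)},
)

def tokenize_and_align(example):
    """Align word-level BIO tags to subword tokens.

    Only the first subword of each word keeps its label; special tokens
    and continuation subwords get -100, which the loss ignores.
    """
    enc = tokenizer(example["tokens"], is_split_into_words=True, truncation=True)
    labels, previous_word = [], None
    for word_id in enc.word_ids():
        if word_id is None or word_id == previous_word:
            labels.append(-100)  # special token or continuation subword
        else:
            labels.append(example["ner_tags"][word_id])
        previous_word = word_id
    enc["labels"] = labels
    return enc

args = TrainingArguments(
    output_dir="ner-finetuned",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
)
# With a dataset mapped through tokenize_and_align (omitted here):
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=dev_ds)
# trainer.train()

Masking continuation subwords with -100 keeps the loss at the word level, which is the usual convention for BIO-tagged NER data.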
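The abstract does not detail the Wikipedia-based reclassification system, so the following is only a hypothetical illustration of the general idea: look up a predicted entity mention in Wikipedia and remap its type based on category evidence. The endpoint is the public MediaWiki API; the keyword table, type inventory, and reclassify helper are invented for the example.

import requests

# Hypothetical mapping from category keywords to coarse entity types.
CATEGORY_HINTS = {
    "births": "PER", "deaths": "PER", "people": "PER",
    "cities": "LOC", "countries": "LOC",
    "companies": "ORG", "organizations": "ORG",
}

def reclassify(mention, lang="en"):
    """Return a coarse entity type for `mention` from its Wikipedia categories,
    or None if no category matches (keep the model's original label)."""
    resp = requests.get(
        f"https://{lang}.wikipedia.org/w/api.php",
        params={"action": "query", "titles": mention, "prop": "categories",
                "cllimit": "max", "redirects": 1, "format": "json"},
        timeout=10,
    )
    pages = resp.json()["query"]["pages"]
    for page in pages.values():
        for cat in page.get("categories", []):
            title = cat["title"].lower()
            for keyword, entity_type in CATEGORY_HINTS.items():
                if keyword in title:
                    return entity_type
    return None

print(reclassify("Ada Lovelace"))  # expected "PER", e.g. via "Category:1815 births"

Keeping the model's original label when no category matches makes the lookup a conservative correction step rather than a replacement for the fine-tuned classifier.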