DAMO-NLP at SemEval-2023 Task 2: A Unified Retrieval-augmented System for Multilingual Named Entity Recognition

Zeqi Tan, Shen Huang, Zixia Jia, Jiong Cai, Yinghui Li, Weiming Lu, Yueting Zhuang, Kewei Tu, Pengjun Xie, Fei Huang

The 17th International Workshop on Semantic Evaluation (SemEval-2023), Task 2: MultiCoNER II (Multilingual Complex Named Entity Recognition)

TLDR: The MultiCoNER II shared task aims to tackle multilingual named entity recognition (NER) in fine-grained and noisy scenarios, inheriting the semantic ambiguity and low-context setting of the MultiCoNER I task. Previous top systems coped with these problems by incorporating knowledge bases or gazetteers but still suffered from insufficient knowledge and limited context; this paper proposes a unified retrieval-augmented system (U-RaNER) that wins 9 out of 13 tracks.
Abstract: The MultiCoNER II shared task aims to tackle multilingual named entity recognition (NER) in fine-grained and noisy scenarios, and it inherits the semantic ambiguity and low-context setting of the MultiCoNER I task. To cope with these problems, the previous top systems in MultiCoNER I incorporated either knowledge bases or gazetteers. However, they still suffer from insufficient knowledge, limited context length, and a single retrieval strategy. In this paper, our team DAMO-NLP proposes a unified retrieval-augmented system (U-RaNER) for fine-grained multilingual NER. We perform error analysis on the previous top systems and reveal that their performance bottleneck lies in insufficient knowledge. We also discover that the limited context length causes the retrieved knowledge to be invisible to the model. To enhance the retrieval context, we incorporate the entity-centric Wikidata knowledge base and use an infusion approach to broaden the contextual scope of the model. We also explore various search strategies and refine the quality of the retrieved knowledge. Our system (we will release the dataset, code, and scripts at https://github.com/modelscope/AdaSeq/tree/master/examples/U-RaNER) wins 9 out of 13 tracks in the MultiCoNER II shared task. Additionally, we compare our system with ChatGPT, one of the large language models that have unlocked strong capabilities on many tasks. The results show that there is still much room for improvement for ChatGPT on the extraction task.
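To make the retrieval-augmentation pattern the abstract describes more concrete, here is a minimal sketch of "retrieve external knowledge, then infuse it as extra context for a token-classification model". The retriever stub, the checkpoint name, and the sentence-pair infusion are illustrative assumptions, not the authors' implementation; the actual U-RaNER code lives in the linked repository.

```python
# Minimal sketch of retrieval-augmented NER, assuming a generic retriever
# and an off-the-shelf multilingual token-classification checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL = "Davlan/xlm-roberta-base-ner-hrl"  # any multilingual NER checkpoint

def retrieve_context(query: str, k: int = 3) -> list[str]:
    # Placeholder for a real retriever (sparse search, dense search, or an
    # entity-centric Wikidata lookup; the paper explores several strategies).
    # Canned passages keep this sketch self-contained.
    return ["Steve Jobs co-founded Apple Inc., a technology company."][:k]

def tag_with_retrieval(sentence: str) -> list[tuple[str, str]]:
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForTokenClassification.from_pretrained(MODEL)
    # Infusion: append retrieved passages as a second segment so the encoder
    # can attend to external knowledge while tagging the original sentence.
    context = " ".join(retrieve_context(sentence))
    enc = tokenizer(sentence, context, return_tensors="pt",
                    truncation="only_second", max_length=512)
    with torch.no_grad():
        logits = model(**enc).logits
    preds = logits.argmax(-1)[0].tolist()
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    # Keep labels only for tokens belonging to the input sentence (segment 0),
    # discarding special tokens and the retrieved context.
    return [(tok, model.config.id2label[p])
            for tok, p, seg in zip(tokens, preds, enc.sequence_ids(0))
            if seg == 0]

if __name__ == "__main__":
    for tok, label in tag_with_retrieval("Jobs returned to Apple in 1997."):
        print(tok, label)
```

Concatenating the retrieved passages as a second segment is one simple way to widen the model's contextual scope; the abstract notes that limited context length can otherwise leave retrieved knowledge invisible to the model, which is why truncation here is applied to the context segment only.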