bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark
Momchil Hardalov, Pepa Atanasova, Todor Mihaylov, Galia Angelova, Kiril Simov, Petya Osenova, Veselin Stoyanov, Ivan K. Koychev, Preslav Nakov, Dragomir Radev
Main: Resources and Evaluation (Poster Paper)
Poster Session 6: Resources and Evaluation (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 12, 09:00-10:30 (EDT) (America/Toronto)
Global Time: July 12, Poster Session 6 (13:00-14:30 UTC)
Keywords:
benchmarking, language resources, multilingual corpora, nlp datasets
Languages:
bulgarian
TLDR:
We present bgGLUE (Bulgarian General Language Understanding Evaluation), a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian.
Our benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, question answering, etc.).
Abstract:
We present bgGLUE (Bulgarian General Language Understanding Evaluation), a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian.
Our benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, question answering, etc.) and machine learning tasks (sequence labeling, document-level classification, and regression). We run the first systematic evaluation of pre-trained language models for Bulgarian, comparing and contrasting results across the nine tasks in the benchmark. The evaluation results show strong performance on sequence labeling tasks, but there is a lot of room for improvement for tasks that require more complex reasoning. We make bgGLUE publicly available together with the fine-tuning and evaluation code, as well as a public leaderboard at https://bgglue.github.io, and we hope that it will enable further advancements in developing NLU models for Bulgarian.
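The released code and leaderboard live at https://bgglue.github.io. As an illustration only, below is a minimal sketch of how one might fine-tune a pre-trained multilingual encoder on a bgGLUE-style document-level classification task (e.g., sentiment analysis) with Hugging Face transformers. The CSV file names, column names, and label count are hypothetical placeholders, not the authors' released setup.

```python
# Minimal sketch: fine-tuning a pre-trained encoder on a bgGLUE-style
# document-level classification task. File names, column names, and the
# number of labels below are hypothetical; see https://bgglue.github.io
# for the actual data, fine-tuning, and evaluation code.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "bert-base-multilingual-cased"  # any Bulgarian-capable encoder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

# Hypothetical local CSV splits with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

def tokenize(batch):
    # Tokenize without padding; the Trainer's default collator
    # (DataCollatorWithPadding, enabled by passing the tokenizer)
    # pads each batch dynamically.
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        evaluation_strategy="epoch",
    ),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```

The benchmark itself ships its own fine-tuning and evaluation scripts, and results are submitted to the public leaderboard; the sketch above only illustrates the general fine-tune-and-evaluate workflow the abstract describes.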