Few-shot Event Detection: An Empirical Study and a Unified View
Yubo Ma, Zehao Wang, Yixin Cao, Aixin Sun
Main: Information Extraction Main-poster Paper
Poster Session 1: Information Extraction (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 10, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 10, Poster Session 1 (15:00-16:30 UTC)
Keywords:
event extraction
TLDR:
Few-shot event detection (ED) has been widely studied, but existing work differs noticeably in motivation, task definition, and experimental setting, which hinders understanding of the models and future progress. This paper presents a thorough empirical study, a unified view of ED models, and...
Abstract:
Few-shot event detection (ED) has been widely studied, but existing work differs noticeably in motivation, task definition, and experimental setting, which hinders understanding of the models and future progress. This paper presents a thorough empirical study, a unified view of ED models, and a better unified baseline. For fair evaluation, we compare 12 representative methods on three datasets, roughly grouped into prompt-based and prototype-based models for detailed analysis. Experiments consistently demonstrate that prompt-based methods, including ChatGPT, still trail prototype-based methods significantly in overall performance. To investigate why prototype-based methods perform better, we break down their design elements along several dimensions and build a unified framework for prototype-based methods. Under this unified view, each prototype-based method can be seen as a combination of modules drawn from these design elements. We further combine all advantageous modules and propose a simple yet effective baseline, which outperforms existing methods by a large margin (e.g., a 2.7% F1 gain in the low-resource setting).
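To make the "prototype-based" family and the modular view concrete, below is a minimal, illustrative sketch (not the paper's released code). The function names (encode, build_prototypes, classify), the toy random embeddings, and the choice of mean pooling with Euclidean distance are assumptions for illustration; the paper's unified view treats the encoder, prototype source, aggregation, and scoring/loss as swappable modules, and each function below stands in for one such slot.

```python
# Illustrative sketch of a prototype-based few-shot event detector.
# All names and choices here are hypothetical stand-ins, not the paper's implementation.
import torch


def encode(num_tokens: int, dim: int = 8) -> torch.Tensor:
    """Module slot: encoder. Stand-in for contextual token embeddings from a PLM."""
    return torch.randn(num_tokens, dim)


def build_prototypes(support_emb: torch.Tensor, support_labels: torch.Tensor) -> torch.Tensor:
    """Module slot: prototype aggregation. Mean of support embeddings per event type."""
    num_types = int(support_labels.max()) + 1
    return torch.stack(
        [support_emb[support_labels == t].mean(dim=0) for t in range(num_types)]
    )


def classify(query_emb: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Module slot: scoring. Assign each query token to its nearest prototype."""
    dists = torch.cdist(query_emb, prototypes)  # (num_query_tokens, num_event_types)
    return dists.argmin(dim=1)                  # predicted event type per query token


if __name__ == "__main__":
    # Toy 2-way episode: 6 support tokens (3 per event type), 4 query tokens.
    support_emb = encode(6)
    support_labels = torch.tensor([0, 0, 0, 1, 1, 1])
    query_emb = encode(4)
    prototypes = build_prototypes(support_emb, support_labels)
    print(classify(query_emb, prototypes))
```

Under this framing, concrete methods differ mainly in which module fills each slot (e.g., how prototypes are derived or which distance and loss are used), which is what allows the paper to recombine advantageous modules into a stronger unified baseline.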