Temporal and Second Language Influence on Intra-Annotator Agreement and Stability in Hate Speech Labelling
Gavin Abercrombie, Dirk Hovy, Vinodkumar Prabhakaran
The 17th Linguistic Annotation Workshop (LAW-XVII) @ ACL 2023. Short paper (4 pages).
Abstract:
Much work in natural language processing (NLP) relies on human annotation. The majority of this work implicitly assumes that annotators' labels are temporally stable, although in reality human judgements are rarely consistent over time. As a subjective annotation task, hate speech labelling depends on annotators' emotional and moral reactions to the language used to convey the message. Studies in cognitive science reveal a 'foreign language effect', whereby people take differing moral positions and perceive offensive phrases as weaker in their second languages. Does this affect annotations as well? We conduct an experiment to investigate the impact of (1) time and (2) language condition (English and German) on measurements of intra-annotator agreement in a hate speech labelling task. While we do not observe the expected lower stability in the second-language condition, we find that overall agreement is significantly lower than is implicitly assumed in annotation tasks, which has important implications for dataset reproducibility in NLP.
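The central quantity in the study is intra-annotator agreement: how consistently the same annotator labels the same items at different points in time. As a minimal sketch (not necessarily the exact metric or data the authors use), such agreement can be quantified as Cohen's kappa between an annotator's labels from two labelling rounds, for example with scikit-learn:

```python
# Minimal illustrative sketch: intra-annotator agreement as Cohen's kappa
# between one annotator's labels for the same items in two rounds of
# annotation separated in time. The label lists are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary hate speech labels (1 = hate, 0 = not hate) assigned
# by the same annotator to the same ten items in two separate rounds.
round_1 = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
round_2 = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]

# Kappa corrects raw agreement for the agreement expected by chance;
# 1.0 means perfectly stable labelling, 0.0 means chance-level stability.
kappa = cohen_kappa_score(round_1, round_2)
print(f"Intra-annotator agreement (Cohen's kappa): {kappa:.2f}")
```

Comparing this value across conditions (e.g. first-language vs. second-language items, or short vs. long gaps between rounds) is one straightforward way to test whether labelling stability differs, in the spirit of the experiment described above.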