Distance from Unimodality: Assessing Polarized Opinions in Abusive Language Detection
John Pavlopoulos, Aristidis Likas
The 7th Workshop on Online Abuse and Harms (WOAH), non-archival paper
Abstract:
The ground truth in classification tasks is often approximated by the fraction of annotators who classified an item as belonging to the positive class. Instances for which this fraction is equal to or above 50% are considered positive, including, however, instances that receive polarized opinions. This encoding convention, often employed to estimate abusive language, is problematic because it disregards the potentially polarized nature of opinions. We present the distance from unimodality (DFU), a measure that estimates the extent of polarization in the distribution of opinions and that correlates well with human judgment. By applying DFU to posts crowd-annotated for toxicity, we found that polarized opinions are more likely when the annotators originate from different countries. We also show that DFU can be exploited as an objective function to train models to predict whether a post will provoke polarized opinions in the future.
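For intuition, here is a minimal sketch of a DFU-style measure, assuming the "maximum deviation from a unimodal shape" reading of the abstract: walking away from the mode of the opinion histogram in both directions, any increase in frequency violates unimodality, and the largest such increase is taken as the score. The function name and the example histograms are illustrative, not the authors' reference implementation.

```python
import numpy as np

def dfu(hist):
    """Sketch of a distance-from-unimodality score.

    `hist` holds the distribution of annotator opinions over ordered
    rating bins as relative frequencies summing to 1. Moving away from
    the mode, frequencies should never increase; the largest increase
    encountered is returned (0 for a unimodal histogram, larger values
    for more polarized ones).
    """
    hist = np.asarray(hist, dtype=float)
    mode = int(np.argmax(hist))
    max_violation = 0.0
    # Right of the mode: frequencies should be non-increasing.
    for i in range(mode + 1, len(hist)):
        max_violation = max(max_violation, hist[i] - hist[i - 1])
    # Left of the mode: frequencies should be non-decreasing toward it.
    for i in range(mode - 1, -1, -1):
        max_violation = max(max_violation, hist[i] - hist[i + 1])
    return max_violation

# A unimodal distribution of toxicity ratings: score is 0.
print(dfu([0.1, 0.2, 0.4, 0.2, 0.1]))  # 0.0
# A polarized (bimodal) distribution: score is positive.
print(dfu([0.45, 0.05, 0.05, 0.45]))   # 0.4
```

Because the score is a simple function of the predicted opinion distribution, it can plausibly serve (as the abstract notes) as an objective for models that predict whether a post will provoke polarized opinions.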