The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks
Nikil Roashan Selvam, Sunipa Dev, Daniel Khashabi, Tushar Khot, Kai-Wei Chang
Main: Ethics and NLP Main-poster Paper
Poster Session 2: Ethics and NLP (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 10, 14:00-15:30 (EDT) (America/Toronto)
Global Time: July 10, Poster Session 2 (18:00-19:30 UTC)
Keywords:
model bias/fairness evaluation, reflections and critiques
TLDR:
How reliably can we trust the scores obtained from social bias benchmarks as faithful indicators of problematic social biases in a given model?
In this work, we study this question by contrasting social biases with non-social biases that stem from choices made during dataset construction (which might not even be discernible to the human eye).
Abstract:
How reliably can we trust the scores obtained from social bias benchmarks as faithful indicators of problematic social biases in a given model?
In this work, we study this question by contrasting social biases with non-social biases that stem from choices made during dataset construction (which might not even be discernible to the human eye). To do so, we empirically simulate various alternative constructions for a given benchmark based on seemingly innocuous modifications (such as paraphrasing or random-sampling) that maintain the essence of their social bias. On two well-known social bias benchmarks (Winogender and BiasNLI), we observe that these shallow modifications have a surprising effect on the resulting degree of bias across various models and consequently the relative ordering of these models when ranked by measured bias.
We hope these troubling observations motivate more robust measures of social biases.
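As a rough illustration of the perturbation analysis the abstract describes, the sketch below re-scores a set of models on randomly subsampled versions of a benchmark and records how the resulting model ranking shifts across constructions. This is not the authors' code: the bias metric, example format, and function names are assumptions made purely for illustration, and paraphrase-based alternatives would replace the subsampling step with wording changes that preserve each example's social-bias content.

```python
# Hypothetical sketch (not the paper's implementation): simulate alternative
# benchmark constructions by random-sampling and re-score each model, to see
# how much the measured bias and the model ranking change.
import random
from typing import Callable, Dict, List, Sequence

def bias_score(model: Callable[[str], float], examples: Sequence[dict]) -> float:
    """Placeholder bias metric: mean gap between the model's scores on
    stereotypical vs. anti-stereotypical variants (assumed example format)."""
    gaps = [model(ex["stereotypical"]) - model(ex["anti_stereotypical"])
            for ex in examples]
    return sum(gaps) / len(gaps)

def simulate_constructions(models: Dict[str, Callable[[str], float]],
                           benchmark: List[dict],
                           n_trials: int = 100,
                           sample_frac: float = 0.8,
                           seed: int = 0) -> List[List[str]]:
    """Return the model ranking (least to most biased) obtained under each
    randomly subsampled construction of the benchmark."""
    rng = random.Random(seed)
    rankings = []
    for _ in range(n_trials):
        # Each trial is one "alternative construction" of the benchmark.
        subset = rng.sample(benchmark, int(sample_frac * len(benchmark)))
        scores = {name: bias_score(m, subset) for name, m in models.items()}
        rankings.append(sorted(scores, key=scores.get))
    return rankings
```

Under this sketch, instability would show up as different trials producing different orderings of the same models, which is the kind of sensitivity to dataset construction the paper reports on Winogender and BiasNLI.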