Linear Guardedness and its Implications
Shauli Ravfogel, Yoav Goldberg, Ryan Cotterell
Main: Machine Learning for NLP Main-poster Paper
Poster Session 6: Machine Learning for NLP (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 12, 09:00-10:30 (EDT) (America/Toronto)
Global Time: July 12, Poster Session 6 (13:00-14:30 UTC)
Keywords:
representation learning
TLDR:
Linear concept-erasure methods are tractable and useful, but their effect on downstream classifiers trained on the modified representations is not fully understood.
We formally define linear guardedness as the inability of a linear adversary to predict the concept directly from the representation, show that under certain assumptions a binary downstream log-linear model cannot recover the erased concept, and constructively show that a multiclass log-linear model sometimes can, exposing the limits of linear guardedness as a bias-mitigation technique.
Abstract:
Methods for erasing human-interpretable concepts from neural representations, under the assumption that the concepts are encoded linearly, have been found to be tractable and useful.
However, the impact of this removal on the behavior of downstream classifiers trained on the modified representations is not fully understood.
In this work, we formally define the notion of linear guardedness as the inability of an adversary to predict the concept directly from the representation, and study its implications.
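As a rough sketch in our own notation (the paper may state this differently), representations $X \in \mathbb{R}^d$ with concept labels $Z$ are linearly guarded if no log-linear adversary predicts the concept better than the best constant predictor:

\[
\forall\, \theta \in \mathbb{R}^{d},\ b \in \mathbb{R}:\quad
\mathrm{Acc}\bigl(Z,\ \sigma(\theta^{\top} X + b)\bigr) \;\le\; \max_{c}\, \Pr(Z = c),
\]

where $\sigma$ denotes the sigmoid (softmax in the multiclass case) and $\mathrm{Acc}$ is prediction accuracy; the precise quantity that is guarded (accuracy versus loss) follows the paper rather than this sketch.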
We show that, in the binary case, under certain assumptions, a downstream log-linear model cannot recover the erased concept.
However, we constructively demonstrate that, in some cases, a multiclass log-linear model can indirectly recover the concept, pointing to the inherent limitations of linear guardedness as a downstream bias-mitigation technique.
These findings shed light on the theoretical limitations of linear erasure methods and highlight the need for further research on the connections between intrinsic and extrinsic bias in neural models.
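To make the setting concrete, the following is a minimal, self-contained sketch (ours, not the authors' code, and not the paper's multiclass construction) of linear concept erasure via a single-direction nullspace projection, followed by a check that a fresh linear adversary trained on the erased representations scores close to chance, i.e., that the erased representations are approximately linearly guarded:

# Toy illustration of linear erasure and linear guardedness (not the paper's method).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "representations": isotropic noise plus a single concept direction.
n, d = 2000, 16
z = rng.integers(0, 2, size=n)                      # binary protected concept
concept_dir = rng.normal(size=d)
concept_dir /= np.linalg.norm(concept_dir)
X = rng.normal(size=(n, d)) + np.outer(z - 0.5, concept_dir) * 4.0

# A linear adversary easily predicts the concept before erasure.
probe = LogisticRegression(max_iter=1000).fit(X, z)

# Erase the concept direction: orthogonal projection onto the probe's nullspace.
w = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
P = np.eye(d) - np.outer(w, w)
X_erased = X @ P

# A fresh linear adversary on the erased representations should be near chance,
# i.e., the erased representations are (approximately) linearly guarded.
adversary = LogisticRegression(max_iter=1000).fit(X_erased, z)
print("accuracy before erasure:", probe.score(X, z))
print("accuracy after erasure: ", adversary.score(X_erased, z))

The abstract's multiclass result says that guardedness in this sense does not prevent a multiclass log-linear model trained on the modified representations from leaking the concept indirectly; that construction is given in the paper and is not illustrated here.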