Feature Interactions Reveal Linguistic Structure in Language Models
Jaap Jumelet, Willem Zuidema
Interpretability and Analysis of Models for NLP (Findings Paper)
Session 4: Interpretability and Analysis of Models for NLP (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 11, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 11, Session 4 (15:00-16:30 UTC)
Keywords:
explanation faithfulness
TLDR:
We study feature interactions in the context of feature attribution methods for post-hoc interpretability.
Abstract:
We study feature interactions in the context of feature attribution methods for post-hoc interpretability.
In interpretability research, getting to grips with feature interactions is increasingly recognised as an important challenge, because interacting features are key to the success of neural networks. Feature interactions allow a model to build up hierarchical representations for its input, and might therefore provide an ideal starting point for investigating linguistic structure in language models.
However, uncovering the exact role that these interactions play is also difficult, and a diverse range of interaction attribution methods has been proposed.
In this paper, we focus on the question of which of these methods most faithfully reflects the inner workings of the target models.
We work out a grey-box methodology in which we train models to perfection on a formal language classification task, using PCFGs.
We show that under specific configurations, some methods are indeed able to uncover the grammatical rules acquired by a model.
Based on these findings we extend our evaluation to a case study on language models, providing novel insights into the linguistic structure that these models have acquired.
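The grey-box recipe described in the abstract (sample strings from a known PCFG, train a model on them, then check whether an interaction attribution method recovers the grammar) can be made concrete with a small sketch. The snippet below is purely illustrative and is not the paper's code: the toy grammar, the stand-in `model` scoring function, and the occlusion-style pairwise interaction score are assumptions chosen for brevity, not the attribution methods evaluated in the paper.

```python
import random

random.seed(0)

# A toy PCFG: nonterminal -> list of (expansion, probability).
PCFG = {
    "S":  [(["NP", "VP"], 1.0)],
    "NP": [(["det", "noun"], 0.7), (["det", "adj", "noun"], 0.3)],
    "VP": [(["verb", "NP"], 0.6), (["verb"], 0.4)],
}

def sample(symbol="S"):
    """Recursively expand a symbol into a flat list of terminal tokens."""
    if symbol not in PCFG:  # terminal symbol
        return [symbol]
    expansions, probs = zip(*PCFG[symbol])
    chosen = random.choices(expansions, weights=probs, k=1)[0]
    return [tok for sym in chosen for tok in sample(sym)]

def model(tokens):
    """Stand-in for a trained classifier's output: counts how often a noun
    is immediately followed by a verb (the 'rule' the sketch tries to recover)."""
    return float(sum(1 for a, b in zip(tokens, tokens[1:])
                     if a == "noun" and b == "verb"))

def interaction(tokens, i, j, mask="<mask>"):
    """Occlusion-style pairwise interaction:
    f(x) - f(x without i) - f(x without j) + f(x without i and j)."""
    def ablate(positions):
        return [mask if k in positions else t for k, t in enumerate(tokens)]
    return (model(tokens) - model(ablate({i})) - model(ablate({j}))
            + model(ablate({i, j})))

sentence = sample()
print(" ".join(sentence))
# A non-zero score means the model's output depends on the two tokens jointly,
# not just on each token in isolation.
for i in range(len(sentence)):
    for j in range(i + 1, len(sentence)):
        score = interaction(sentence, i, j)
        if score != 0.0:
            print(f"interaction({i}:{sentence[i]}, {j}:{sentence[j]}) = {score}")
```

In the paper's setting, `model` would instead be a network trained to perfection on the formal language, and a faithful interaction attribution method should assign high interaction scores precisely to those token pairs that are governed by the same grammatical rule.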