[Demo] Inseq: An Interpretability Toolkit for Sequence Generation Models
Gabriele Sarti, Nils Feldhus, Ludwig Sickert, Oskar van der Wal
Demo Paper: Interpretability and Analysis of Models for NLP
Demo Session 5: Interpretability and Analysis of Models for NLP (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 11, 16:15-17:45 (EDT) (America/Toronto)
Global Time: July 11, Demo Session 5 (20:15-21:45 UTC)
Abstract:
Past work in natural language processing interpretability has focused mainly on popular classification tasks while largely overlooking generation settings, partly due to a lack of dedicated tools. In this work, we introduce Inseq, a Python library that democratizes access to interpretability analyses of sequence generation models. Inseq enables intuitive and optimized extraction of models' internal information and feature importance scores for popular decoder-only and encoder-decoder Transformer architectures. We showcase its potential by using it to highlight gender biases in machine translation models and to locate factual knowledge inside GPT-2. Thanks to its extensible interface supporting cutting-edge techniques such as contrastive feature attribution, Inseq can drive future advances in explainable natural language generation, centralizing good practices and enabling fair and reproducible model evaluations.
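As a concrete illustration of the workflow the abstract describes, the sketch below loads a translation model together with an attribution method and visualizes token-level importance scores for a generated output. It follows the library's documented load_model/attribute interface; the specific checkpoint and the choice of integrated gradients are illustrative, not the paper's experimental setup.

    import inseq

    # Load a Hugging Face encoder-decoder model paired with an attribution method.
    # The checkpoint and "integrated_gradients" here are illustrative choices.
    model = inseq.load_model("Helsinki-NLP/opus-mt-en-it", "integrated_gradients")

    # Generate a translation and compute feature importance scores
    # for each generated token with respect to the source tokens.
    out = model.attribute("Hello everyone, this is a test sentence.")

    # Visualize the resulting source-target attribution map
    # (rendered as HTML in notebooks, plain text in terminals).
    out.show()

The same interface extends to decoder-only models and to other attribution methods, including the contrastive feature attribution technique mentioned above, by swapping the model identifier and method name passed to load_model.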