[Demo] Finspector: A Human-Centered Visual Inspection Tool for Exploring and Comparing Biases among Foundation Models

Bum Chul Kwon, Nandana Mihindukulasooriya

Demo Paper: Large Language Models

Demo Session 7: Large Language Models (Poster)
Conference Room: Frontenac Ballroom and Queen's Quay
Conference Time: July 12, 11:00-12:30 EDT (America/Toronto)
Global Time: July 12, 15:00-16:30 UTC (Demo Session 7)
TLDR: Finspector is a human-centered visual inspection tool that surfaces potential biases in pre-trained language models by visualizing the log-likelihood scores the models assign to sentences, helping researchers compare biases across foundation models.
Abstract: Pre-trained transformer-based language models are becoming increasingly popular due to their exceptional performance on various benchmarks. However, concerns persist regarding the presence of hidden biases within these models, which can lead to discriminatory outcomes and reinforce harmful stereotypes. To address this issue, we propose Finspector, a human-centered visual inspection tool designed to detect biases in different categories through log-likelihood scores generated by language models. The goal of the tool is to enable researchers to easily identify potential biases using visual analytics, ultimately contributing to a fairer and more just deployment of these models in both academic and industrial settings. Finspector is available at https://github.com/IBM/finspector.
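
The abstract does not spell out how the log-likelihood scores are obtained. The sketch below shows one common approach to producing such scores, pseudo-log-likelihood scoring with a masked language model, which is the kind of per-sentence score a tool like Finspector can visualize. The model choice (bert-base-uncased), the example sentence pair, and the helper function pseudo_log_likelihood are illustrative assumptions, not Finspector's actual API.

```python
# A minimal sketch of pseudo-log-likelihood (PLL) scoring with a masked
# language model, in the spirit of the scores Finspector visualizes.
# The model, sentence pair, and function below are illustrative
# assumptions, not Finspector's implementation.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum log P(token | rest of sentence), masking one token at a time."""
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    total = 0.0
    # Skip the special tokens: [CLS] at position 0 and [SEP] at the end.
    for i in range(1, input_ids.size(0) - 1):
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        total += log_probs[input_ids[i]].item()
    return total

# Compare a sentence with a minimally edited counterpart: a consistent
# gap in scores across many such pairs can signal a bias worth inspecting.
pair = ("The nurse said she would help.", "The nurse said he would help.")
for s in pair:
    print(f"{pseudo_log_likelihood(s):8.2f}  {s}")
```

In a tool like Finspector, scores of this kind would be computed over many sentences per bias category and per model, then plotted so that systematic gaps between paired sentences, and differences among foundation models, stand out visually.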