Conformal Nucleus Sampling
Shauli Ravfogel, Yoav Goldberg, Jacob Goldberger
Findings Paper: Interpretability and Analysis of Models for NLP
Session 4: Interpretability and Analysis of Models for NLP (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 11, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 11, Session 4 (15:00-16:30 UTC)
Spotlight Session: Spotlight - Metropolitan West
Conference Room: Metropolitan West
Conference Time: July 10, 19:00-21:00 (EDT) (America/Toronto)
Global Time: July 10, Spotlight Session (23:00-01:00 UTC)
Keywords:
calibration/uncertainty
TLDR:
Language models generate text by successively sampling the next word. A decoding procedure based on nucleus (top-$p$) sampling chooses from the smallest possible set of words whose cumulative probability exceeds $p$.
In this work, we assess whether a top-$p$ set is indeed aligned with its probabilistic meaning in various linguistic contexts.
Abstract:
Language models generate text by successively sampling the next word. A decoding procedure based on nucleus (top-$p$) sampling chooses from the smallest possible set of words whose cumulative probability exceeds $p$.
In this work, we assess whether a top-$p$ set is indeed aligned with its probabilistic meaning in various linguistic contexts.
We employ conformal prediction, a calibration procedure that constructs minimal prediction sets at a desired confidence level, to calibrate the parameter $p$ as a function of the entropy of the next-word distribution. We find that OPT models are overconfident, and that calibration shows a moderate inverse scaling with model size.
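To make the decoding rule in the abstract concrete, the sketch below constructs the top-$p$ (nucleus) set from a next-word distribution: the smallest set of words, taken in order of decreasing probability, whose cumulative mass exceeds $p$. This is a minimal NumPy sketch, not the authors' implementation; the function name `top_p_set` and the example distribution are illustrative.

```python
import numpy as np

def top_p_set(probs: np.ndarray, p: float) -> np.ndarray:
    """Return the indices of the smallest set of words whose
    cumulative probability exceeds p (the nucleus)."""
    order = np.argsort(probs)[::-1]           # words by descending probability
    cumulative = np.cumsum(probs[order])
    # Smallest k such that the top-k mass strictly exceeds p.
    k = int(np.searchsorted(cumulative, p, side="right")) + 1
    return order[:k]

# Example: a peaked next-word distribution over a 5-word vocabulary.
probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
nucleus = top_p_set(probs, p=0.8)   # -> indices [0, 1, 2], total mass 0.85
print(nucleus, probs[nucleus].sum())
```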
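The calibration step can likewise be sketched. In split conformal prediction, a standard choice of score for an example is the cumulative mass needed before the true next word enters the sorted set; the calibrated $p$ is then a finite-sample quantile of these scores, giving roughly $(1-\alpha)$ coverage of the true word. The abstract states that $p$ is calibrated as a function of next-word entropy, so the sketch also bins calibration examples by entropy. All names here (`conformal_p`, `bin_edges`, `alpha`) are assumptions for illustration, not details from the paper.

```python
import numpy as np

def entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a next-word distribution."""
    return float(-np.sum(probs * np.log(probs + 1e-12)))

def conformal_p(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration of the nucleus parameter p.
    Score: probability mass up to and including the true next word."""
    scores = []
    for probs, label in zip(cal_probs, cal_labels):
        order = np.argsort(probs)[::-1]
        cumulative = np.cumsum(probs[order])
        rank = int(np.where(order == label)[0][0])
        scores.append(cumulative[rank])
    n = len(scores)
    # Finite-sample conformal quantile level, clipped to 1.
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(scores, q, method="higher"))

def conformal_p_by_entropy(cal_probs, cal_labels, bin_edges, alpha=0.1):
    """Calibrate one p per entropy bin, so that p becomes a
    function of the entropy of the next-word distribution."""
    bins = np.digitize([entropy(pr) for pr in cal_probs], bin_edges)
    p_per_bin = {}
    for b in np.unique(bins):
        idx = [i for i, bb in enumerate(bins) if bb == b]
        p_per_bin[int(b)] = conformal_p(
            [cal_probs[i] for i in idx], [cal_labels[i] for i in idx], alpha
        )
    return p_per_bin
```

At decoding time, one would compute the entropy of the model's next-word distribution, look up the calibrated $p$ for its entropy bin, and sample from the resulting top-$p$ set.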