Cost-effective Distillation of Large Language Models
Sayantan Dasgupta, Trevor Cohn, Timothy Baldwin
Findings: Machine Learning for NLP Findings Paper
Session 7: Machine Learning for NLP (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 12, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 12, Session 7 (15:00-16:30 UTC)
Keywords:
model compression methods
TLDR:
Knowledge distillation (KD) involves training a small "student" model to replicate the strong performance of a high-capacity "teacher" model, enabling efficient deployment in resource-constrained settings. Top-performing methods tend to be task- or architecture-specific and lack generalizability. ...
Abstract:
Knowledge distillation (KD) involves training a small "student" model to replicate the strong performance of a high-capacity "teacher" model, enabling efficient deployment in resource-constrained settings. Top-performing methods tend to be task- or architecture-specific and lack generalizability. Several existing approaches require pretraining of the teacher on task-specific datasets, which can be costly for large datasets and unstable for small ones. Here we propose an approach for improving KD through a novel distillation loss that is agnostic to the task and model architecture. We successfully apply our method to the distillation of BERT-base and achieve highly competitive results from the distilled student across a range of GLUE tasks, especially for tasks with smaller datasets.
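For readers unfamiliar with the setup, the sketch below shows the standard teacher-student distillation objective (a temperature-softened KL term plus a cross-entropy term on gold labels). It is a generic PyTorch illustration of the KD framework described in the first sentence of the abstract, not the task- and architecture-agnostic loss proposed in this paper; the function name kd_loss and the temperature and alpha parameters are illustrative assumptions.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Generic knowledge-distillation objective: weighted sum of the KL
    divergence between temperature-softened teacher and student
    distributions and the cross-entropy against the gold labels."""
    # Soft targets: KL divergence on temperature-scaled logits, rescaled by
    # T^2 so gradient magnitudes stay comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy on the gold labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Typical usage: the teacher is frozen and only the student is updated.
# with torch.no_grad():
#     teacher_logits = teacher(inputs)   # [batch_size, num_classes]
# student_logits = student(inputs)
# loss = kd_loss(student_logits, teacher_logits, labels)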