Differentiable Instruction Optimization for Cross-Task Generalization
Masaru Isonuma, Junichiro Mori, Ichiro Sakata
Findings Paper: Generation
Session 7: Generation (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 12, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 12, Session 7 (15:00-16:30 UTC)
Spotlight Session: Metropolitan Centre
Conference Room: Metropolitan Centre
Conference Time: July 10, 19:00-21:00 (EDT) (America/Toronto)
Global Time: July 10, Spotlight Session (23:00-01:00 UTC)
Keywords:
few-shot generation, text-to-text generation
Abstract:
Instruction tuning has attracted much attention as a way to achieve generalization across a wide variety of tasks.
Although various types of instructions have been manually created for instruction tuning, it remains unclear what kind of instruction is optimal for obtaining cross-task generalization ability.
This work presents instruction optimization, which optimizes training instructions with respect to generalization ability.
Rather than manually tuning instructions, we introduce learnable instructions and optimize them with gradient descent by leveraging bilevel optimization.
Experimental results show that the learned instructions enhance the diversity of the instruction set and improve generalization ability compared to using only manually created instructions.
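The bilevel scheme described in the abstract can be illustrated with a toy sketch: an inner loop fits model parameters to a training loss that depends on a learnable "instruction", and an outer loop updates the instruction by descending a validation loss through the unrolled inner step (a hypergradient). The scalars, losses, and learning rates below are hypothetical stand-ins, not the authors' implementation; real instruction optimization would operate on instruction embeddings and a language model.

```python
def inner_step(w, u, lr=0.5):
    # One inner-loop gradient step on a toy training loss
    # L_tr(w, u) = 0.5 * (w - u)^2, whose gradient w.r.t. w is (w - u).
    # Here w plays the role of model weights and u the learnable instruction.
    return w - lr * (w - u)

def bilevel_optimize(target=3.0, steps=200, inner_lr=0.5, outer_lr=0.5):
    u, w = 0.0, 0.0  # learnable "instruction" and model weight (both scalars)
    for _ in range(steps):
        w_new = inner_step(w, u, inner_lr)  # inner loop, unrolled one step
        # Outer (validation) loss L_val(w') = 0.5 * (w' - target)^2.
        # Hypergradient via the chain rule through the unrolled step:
        # dL_val/du = (w' - target) * dw'/du, and dw'/du = inner_lr here.
        hypergrad = (w_new - target) * inner_lr
        u -= outer_lr * hypergrad  # gradient descent on the instruction
        w = w_new
    return u, w

u, w = bilevel_optimize()
print(abs(w - 3.0) < 1e-6)  # the model converges toward the validation target
```

Because the inner loss is quadratic, the one-step hypergradient is exact, so the instruction steers the model to minimize the validation loss; with a neural model, the same chain rule is applied through one or more unrolled training steps by automatic differentiation.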