MVP-Tuning: Multi-View Knowledge Retrieval with Prompt Tuning for Commonsense Reasoning
Yongfeng Huang, Yanyang Li, Yichong Xu, Lin Zhang, Ruyi Gan, Jiaxing Zhang, Liwei Wang
Main: Question Answering (Main-Poster Paper)
Session 7: Question Answering (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 12, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 12, Session 7 (15:00-16:30 UTC)
Keywords:
commonsense qa, knowledge base qa, open-domain qa
TLDR:
Recent advances in pre-trained language models (PLMs) have facilitated the development of commonsense reasoning tasks. However, existing methods rely on multi-hop knowledge retrieval and thus suffer from low accuracy due to noise embedded in the acquired knowledge. In addition, these methods often attain...
Abstract:
Recent advances in pre-trained language models (PLMs) have facilitated the development of commonsense reasoning tasks. However, existing methods rely on multi-hop knowledge retrieval and thus suffer from low accuracy due to noise embedded in the acquired knowledge. In addition, these methods often incur high computational costs and nontrivial knowledge loss because they encode the knowledge independently of the PLM, making it less relevant to the task and thus resulting in a poor local optimum. In this work, we propose Multi-View Knowledge Retrieval with Prompt Tuning (MVP-Tuning). MVP-Tuning leverages similar question-answer pairs in the training set to improve knowledge retrieval and employs a single prompt-tuned PLM to model knowledge and input text jointly. We conduct our experiments on five commonsense reasoning QA benchmarks and show that MVP-Tuning outperforms all other baselines on 4 out of 5 datasets with less than 2% trainable parameters. MVP-Tuning even achieves a new state-of-the-art result on OpenBookQA and is number one on the leaderboard.
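To make the abstract's retrieval idea concrete: for a new question, similar question-answer pairs from the training set are retrieved and prepended to the input before it reaches the prompt-tuned PLM. The following is a minimal toy sketch of that retrieval-and-concatenation step only, not the paper's actual implementation — the similarity function (simple token overlap), the function names, and the example data are all invented for illustration; the paper's retriever and prompt-tuning setup are more sophisticated.

```python
# Toy sketch of retrieve-similar-QA-pairs-then-concatenate, as described
# in the abstract. Similarity here is plain token-set Jaccard overlap,
# a stand-in for whatever retriever the paper actually uses.
import re


def tokenize(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))


def jaccard(a, b):
    """Token-overlap similarity between two strings, in [0, 1]."""
    ta, tb = tokenize(a), tokenize(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def retrieve_similar_qa(question, train_qa, k=2):
    """Return the k training (question, answer) pairs most similar to `question`."""
    ranked = sorted(train_qa, key=lambda qa: jaccard(question, qa[0]), reverse=True)
    return ranked[:k]


def build_input(question, train_qa, k=2):
    """Prepend retrieved QA pairs as extra context for the (prompt-tuned) PLM."""
    context = " ".join(f"Q: {q} A: {a}"
                       for q, a in retrieve_similar_qa(question, train_qa, k))
    return f"{context} Q: {question}"


train = [
    ("What do plants need to grow?", "sunlight and water"),
    ("Where does rain come from?", "clouds"),
    ("What do animals need to survive?", "food and water"),
]
print(build_input("Which things do plants need to grow?", train, k=1))
```

The concatenated string would then be fed to a single PLM whose soft-prompt parameters are the only trainable weights, which is how the abstract's "less than 2% trainable parameters" figure arises.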