SkillQG: Learning to Generate Question for Reading Comprehension Assessment
Xiaoqiang Wang, Bang Liu, Siliang Tang, Lingfei Wu
Findings: Question Answering Findings Paper
Session 4: Question Answering (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 11, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 11, Session 4 (15:00-16:30 UTC)
Keywords:
reading comprehension, question generation
TLDR:
We present SkillQG: a question generation framework with controllable comprehension types for assessing and improving machine reading comprehension models.
Abstract:
We present SkillQG: a question generation framework with controllable comprehension types for assessing and improving machine reading comprehension models.
Existing question generation systems widely differentiate questions by literal information such as question words and answer types to generate semantically relevant questions for a given context.
However, they rarely consider the comprehension nature of questions, i.e., the different comprehension capabilities embodied by different questions.
In comparison, our SkillQG is able to tailor fine-grained assessment and improvement to the capabilities of question answering models built on it.
Specifically, we first frame the comprehension type of questions based on a hierarchical skill-based schema.
We then formulate SkillQG as a skill-conditioned question generator.
Furthermore, to improve the controllability of generation, we augment the input text with skill-specific question focus and knowledge, which are constructed by iteratively prompting pre-trained language models.
Empirical results demonstrate that SkillQG outperforms baselines in terms of quality, relevance, and skill-controllability, while showing a promising performance boost on the downstream question answering task.
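To make the skill-conditioned formulation above concrete, the minimal sketch below prepends a skill label and a question focus to the context before prompting an off-the-shelf seq2seq model. The model name, prompt format, and skill labels here are illustrative assumptions only; they are not the paper's actual prompts, skill schema, or knowledge-augmentation pipeline.

```python
# Minimal sketch (not the authors' code): skill-conditioned question generation
# with an off-the-shelf seq2seq model from Hugging Face Transformers.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-base"  # placeholder model, not necessarily what SkillQG uses
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def generate_question(context: str, skill: str, focus: str) -> str:
    """Generate a question conditioned on a comprehension skill and a question focus.

    `skill` and `focus` act as control signals prepended to the context,
    mirroring in spirit (not in detail) SkillQG's skill-conditioned generator.
    """
    prompt = f"skill: {skill} | focus: {focus} | context: {context} Generate a question:"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=48, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    context = "The mitochondrion is the organelle that produces most of the cell's ATP."
    print(generate_question(context, skill="causal reasoning", focus="role of ATP production"))
```

In this toy setup, varying the `skill` string is what steers the generator toward different comprehension types; SkillQG additionally constructs the focus and background knowledge automatically by iterative prompting rather than supplying them by hand.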