T5: Indirectly Supervised Natural Language Processing

Ben Zhou, Dan Roth, Kai-Wei Chang, Muhao Chen, Qiang Ning, Wenpeng Yin

Abstract: This tutorial targets researchers and practitioners who are interested in ML technologies for NLP from indirect supervision. In particular, we will present a diverse thread of indirect supervision studies that try to answer the following questions: (i) when and how can we provide supervision for a target task T if all we have is data that corresponds to a related task T'? (ii) humans do not use exhaustive supervision; they rely on occasional feedback and learn from incidental signals from various sources; how can we effectively incorporate such supervision in machine learning? (iii) how can we leverage multi-modal supervision to help NLP? To this end, we will discuss several lines of research that address these challenges, including (i) indirect supervision from a related task T' that handles a target task T whose output space ranges from a moderate size to an open space, (ii) the use of sparsely occurring and incidental signals, such as partial labels, noisy labels, knowledge-based constraints, and cross-domain or cross-task annotations, all of which have statistical associations with the task, (iii) principled ways to measure and understand why these incidental signals can contribute to our target tasks, and (iv) indirect supervision from vision-language signals. We will conclude the tutorial by outlining directions for further investigation.
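As a concrete illustration of point (i), one common form of indirect supervision is to cast a target task T (e.g., text classification over labels with no training data) as natural language inference, a related task T' for which large annotated datasets and pretrained models already exist. The minimal sketch below uses the Hugging Face zero-shot-classification pipeline to do this; the model name, input text, and candidate labels are illustrative assumptions and not part of the tutorial materials.

```python
# Minimal sketch: indirect supervision for classification via an NLI model (task T').
# Assumes the `transformers` library is installed; model and labels are illustrative.
from transformers import pipeline

# An NLI model fine-tuned on MNLI serves as the source of indirect supervision.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "The central bank raised interest rates by 50 basis points."
candidate_labels = ["economics", "sports", "health"]  # target task T's label space

# Each candidate label is inserted into an entailment hypothesis template
# (by default "This example is {}.") and scored by the NLI model, so the
# target task T is addressed without any T-specific training data.
result = classifier(text, candidate_labels=candidate_labels)
print(result["labels"][0], result["scores"][0])  # top label and its score
```

Because the NLI model scores arbitrary label descriptions rather than a fixed output layer, the same setup can, in principle, handle label spaces of moderate size or more open-ended ones, which is the setting the first line of research above concerns.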

Time: Sunday, 14:00
Event: T5: Indirectly Supervised Natural Language Processing
Hosts: Ben Zhou, Dan Roth, Kai-Wei Chang, Muhao Chen, Qiang Ning, Wenpeng Yin