Mind the Gap between the Application Track and the Real World
Ananya Ganesh, Jie Cao, E. Margaret Perkoff, Rosy Southwell, Martha Palmer, Katharina Kann
Main Track: Theme: Reality Check (Oral Paper)
Session 2: Theme: Reality Check (Oral)
Conference Room: Metropolitan East
Conference Time: July 10, 14:00-15:30 (EDT) (America/Toronto)
Global Time: July 10, Session 2 (18:00-19:30 UTC)
Keywords:
evaluation
Abstract:
Recent advances in NLP have led to a rise in inter-disciplinary and application-oriented research. While this demonstrates the growing real-world impact of the field, research papers frequently feature experiments that do not account for the complexities of realistic data and environments.
To explore the extent of this gap, we investigate the relationship between the real-world motivations described in NLP papers and the models and evaluation that comprise the proposed solutions.
We first survey papers from the NLP Applications track at ACL 2020 and EMNLP 2020, asking whether each paper's stated motivation differs from its experimental setting, and if so, whether those differences are acknowledged. We find that many papers fall short of considering real-world input and output conditions because they adopt simplified modeling or evaluation settings.
As a case study, we then empirically show that the performance of an educational dialog understanding system deteriorates when used in a realistic classroom environment.