Reproducibility in NLP: What Have We Learned from the Checklist?
Ian Magnusson, Noah A. Smith, Jesse Dodge
Findings Paper (Theme: Reality Check)
Session 4: Theme: Reality Check (Virtual Poster)
Conference Room: Pier 7&8
Conference Time: July 11, 11:00-12:30 (EDT) (America/Toronto)
Global Time: July 11, Session 4 (15:00-16:30 UTC)
Spotlight Session: Metropolitan West
Conference Room: Metropolitan West
Conference Time: July 10, 19:00-21:00 (EDT) (America/Toronto)
Global Time: July 10, Spotlight Session (23:00-01:00 UTC)
Keywords:
(non-)reproducibility
TLDR:
Scientific progress in NLP rests on the reproducibility of researchers' claims.
The *CL conferences created the NLP Reproducibility Checklist in 2020 to be completed by authors at submission to remind them of key information to include. We provide the first analysis of the Checklist by examining 10,405 anonymous responses to it.
Abstract:
Scientific progress in NLP rests on the reproducibility of researchers' claims.
The *CL conferences created the NLP Reproducibility Checklist in 2020 to be completed by authors at submission to remind them of key information to include. We provide the first analysis of the Checklist by examining 10,405 anonymous responses to it. First, we find evidence of an increase in reporting of information on efficiency, validation performance, summary statistics, and hyperparameters after the Checklist's introduction. Further, we show that the acceptance rate grows for submissions with more Yes responses. We find that the 44% of submissions that gather new data are 5% less likely to be accepted than those that do not; the average reviewer-rated reproducibility of these submissions is also 2% lower relative to the rest. We find that only 46% of submissions claim to open-source their code, though submissions that do have an 8% higher reproducibility score relative to those that do not, the largest difference for any Checklist item. We discuss what can be inferred about the state of reproducibility in NLP, and provide a set of recommendations for future conferences, including: a) allowing authors to submit code and appendices one week after the deadline, and b) measuring dataset reproducibility with a checklist of data collection practices.