PhotoBook is a collaborative dialogue game where two players receive private, partially-overlapping sets of images and resolve which images they have in common.
The game poses a significant challenge for machines: learning how people build common ground around multimodal context to communicate effectively.
Methods developed in the literature, however, cannot be deployed to real gameplay,
since they only tackle some subtasks of the game
and require additional reference chain inputs, whose extraction process is imperfect.
Therefore, we propose a reference chain-free listener model
that directly addresses the game's predictive task, i.e., deciding whether an image is shared with one's partner.
Our DeBERTa-based listener model reads the full dialogue and utilizes
CLIPScore features to assess utterance--image relevance.
We achieve >77\% accuracy on unseen sets of images and game themes, outperforming the baseline by >17 points.