Natural language is ambiguous. Resolving ambiguous questions is key to successfully answering them.
Focusing on questions about images, we create a dataset of ambiguous examples. We annotate these, grouping answers by the underlying question they address and rephrasing the question for each group to reduce ambiguity.
Our analysis reveals a linguistically aligned ontology of reasons for ambiguity in visual questions.
We then develop an English question-generation model which, as we demonstrate via automatic and human evaluation, produces less ambiguous questions.
We further show that the question-generation objective we use allows the model to integrate answer group information without any direct supervision.