Abstract
The study of algorithms that automatically answer visual
questions is currently motivated by visual question answering (VQA) datasets constructed in artificial VQA settings.
We propose VizWiz, the first goal-oriented VQA dataset
arising from a natural VQA setting. VizWiz consists of over
31,000 visual questions originating from blind people who
each took a picture using a mobile phone and recorded a
spoken question about it, together with 10 crowdsourced
answers per visual question. VizWiz differs from many
existing VQA datasets because (1) its images are captured by
blind photographers and so are often of poor quality, (2) its questions are spoken and so are more conversational, and (3)
its visual questions often cannot be answered. Evaluating
modern algorithms for answering visual questions and for deciding whether a visual question is answerable reveals that VizWiz
is a challenging dataset. We introduce this dataset to encourage a larger community to develop more generalized
algorithms that can assist blind people.