The paper shows that, with a relatively simple model using only
common deep learning building blocks, you can get better accuracy
than the majority of previously published work on the popular VQA v1 dataset.
This repository is intended to provide a straightforward
implementation of the paper for other researchers to build on.
The results closely match the reported ones, as most of the
details should be exactly the same as in the paper. (Thanks to the authors
for answering my questions about some details!)
This implementation seems to consistently converge to results about 0.1%
better than those reported. There are two main implementation differences:
Instead of setting a limit on the maximum number of words per
question and cutting off all words beyond this limit, this code uses
per-example dynamic unrolling of the language model (a minimal sketch of
this follows below).
An issue with the official evaluation code
makes some questions unanswerable. This code does not normalize
machine-given answers, which avoids that problem. Since the vast majority
of questions are unaffected by the issue, it is very unlikely to have any
significant impact on accuracy.
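The following is a minimal sketch of what per-example dynamic unrolling can look like in PyTorch; the variable names, vocabulary size, and LSTM dimensions are illustrative assumptions, not the values used in this repository.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

embedding = nn.Embedding(num_embeddings=15000, embedding_dim=300, padding_idx=0)
lstm = nn.LSTM(input_size=300, hidden_size=1024, batch_first=True)

def encode_questions(questions, q_lens):
    # questions: [batch, max_len] zero-padded word indices
    # q_lens:    [batch] true length of each question
    embedded = embedding(questions)
    # Packing makes the LSTM stop at each question's true length
    # instead of truncating or padding to a fixed word limit.
    packed = pack_padded_sequence(embedded, q_lens.cpu(),
                                  batch_first=True, enforce_sorted=False)
    _, (hidden, _) = lstm(packed)
    return hidden.squeeze(0)  # [batch, 1024] final state per question
```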
- `qa_path` should contain the files `OpenEnded_mscoco_train2014_questions.json`, `OpenEnded_mscoco_val2014_questions.json`, `mscoco_train2014_annotations.json`, and `mscoco_val2014_annotations.json`.
- `train_path`, `val_path`, and `test_path` should contain the train, validation, and test .jpg images respectively.
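For illustration, the corresponding entries in config.py might look like the following; the directory layout is an assumption, so point the variables at wherever your data actually lives.

```python
# config.py (excerpt); the paths below are placeholders, not the defaults.
qa_path = 'vqa'                   # the four questions/annotations .json files
train_path = 'mscoco/train2014'   # train .jpg images
val_path = 'mscoco/val2014'       # validation .jpg images
test_path = 'mscoco/test2015'     # test .jpg images
```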
Pre-process the images (93 GiB of free disk space required for f16 accuracy) using ResNet-152 weights ported from Caffe, and build the question and answer vocabularies, with:
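Assuming the preprocessing entry points are named as in comparable repositories, the invocation would look like:

```
python preprocess-images.py
python preprocess-vocab.py
```

Then start the training (again assuming train.py is the entry point):

```
python train.py
```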
This will alternate between one epoch of training on the train split
and one epoch of validation on the validation split while printing the
current training progress to stdout and saving logs in the logs directory.
The logs contain the name of the model, training statistics, contents of config.py, model weights, evaluation information (per-question answer and accuracy), and question and answer vocabularies.
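As a hedged example, assuming each log is a dictionary serialized with torch.save (a common convention in PyTorch projects), it could be inspected like this:

```python
import torch

# The path and key layout are assumptions based on the description above.
log = torch.load('logs/my-model.pth', map_location='cpu')
print(log.keys())  # model name, config, weights, eval results, vocabularies
```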
During training (which takes a while), plot the training progress with:
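A plausible invocation, assuming a plotting script named view-log.py that takes the path of a saved log as its argument:

```
python view-log.py logs/my-model.pth
```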