This is the code that we wrote to train the state-of-the-art VQA models described in our paper. Our ensemble of 7 models obtained 66.67% on real open-ended test-dev and 70.24% on real multiple-choice test-dev.
Live Demo
You can upload your own images and ask the model your own questions. Try the live demo!
Pretrained Model
We are releasing the “MCB + Genome + Att. + GloVe” model from the paper, which achieves 65.38% on real open-ended test-dev. This is our best individual model.
You can easily use this model with our evaluation code or with our demo server code.
Prerequisites
In order to use our pretrained model:
Compile the feature/20160617_cb_softattention branch of our fork of Caffe. This branch contains Yang Gao’s Compact Bilinear layers (dedicated repo, paper) released under the BDD license, and Ronghang Hu’s Soft Attention layers (paper) released under BSD 2-clause.
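For intuition, here is a NumPy sketch of the operation the Compact Bilinear layers implement: each input is compressed with a count sketch, and the sketches are convolved via FFT to approximate the (huge) outer product of the two feature vectors. This is only an illustration of the math, not the Caffe implementation; the function names and dimensions are made up.

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Project x to d dims: entry i of x is added to bucket h[i] with sign s[i]."""
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

def mcb(x, y, d=64, seed=0):
    """Approximate the outer product of x and y, compressed to d dims,
    by convolving their count sketches in the FFT domain (the MCB trick)."""
    rng = np.random.RandomState(seed)
    hx = rng.randint(d, size=x.shape[0])
    sx = rng.choice([-1.0, 1.0], size=x.shape[0])
    hy = rng.randint(d, size=y.shape[0])
    sy = rng.choice([-1.0, 1.0], size=y.shape[0])
    fx = np.fft.fft(count_sketch(x, hx, sx, d))
    fy = np.fft.fft(count_sketch(y, hy, sy, d))
    return np.real(np.fft.ifft(fx * fy))
```

Because every step is linear in each input, the result is bilinear: scaling one input scales the output by the same factor, just like a true outer product.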
Optional: Install spaCy and download GloVe vectors. The latest stable release of spaCy has a bug that prevents GloVe vectors from working, so you need to install the HEAD version. See train/README.md.
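If you want to inspect the GloVe vectors without going through spaCy, the file format is simple: one token per line followed by its vector components, space-separated. A minimal standalone parser (illustration only, not part of this repo):

```python
import numpy as np

def load_glove(lines):
    """Parse GloVe-format lines ("token v1 v2 ... vN") into a dict of vectors."""
    vectors = {}
    for line in lines:
        parts = line.rstrip().split(' ')
        vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

# Usage on a real file (filename is an example):
#   vectors = load_glove(open('glove.6B.300d.txt'))
```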
To generate an answers JSON file in the format expected by the VQA evaluation code and VQA test server, you can use eval/ensemble.py. This code can also ensemble multiple models. Running python ensemble.py will print out a help message telling you what arguments to use.
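The basic idea behind ensembling is to combine each model's per-answer scores and pick the best answer per question, then emit results in the VQA submission format (a list of {"question_id", "answer"} records). The sketch below averages scores, which is one common choice; eval/ensemble.py may combine models differently, and all names and numbers here are illustrative.

```python
import json
import numpy as np

def ensemble_answers(model_scores, answer_vocab, question_ids):
    """Average per-answer score matrices (num_questions x num_answers)
    from several models and pick the highest-scoring answer per question."""
    mean_scores = np.mean(model_scores, axis=0)
    best = mean_scores.argmax(axis=1)
    return [{"question_id": int(qid), "answer": answer_vocab[i]}
            for qid, i in zip(question_ids, best)]

# Hypothetical example: two models, two questions, three candidate answers.
scores_a = np.array([[0.1, 0.7, 0.2], [0.6, 0.3, 0.1]])
scores_b = np.array([[0.2, 0.5, 0.3], [0.3, 0.5, 0.2]])
results = ensemble_answers([scores_a, scores_b], ["yes", "no", "2"], [101, 102])
answers_json = json.dumps(results)  # format expected by the VQA test server
```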
Demo Server
The code that powers our live demo is in server/. To run this, you’ll need to install Flask and change the constants at the top of server.py. Then, just do python server.py, and the server will bind to 0.0.0.0:5000.
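The shape of such a server can be sketched in a few lines of Flask. The route name, constants, and stub handler below are placeholders, not the actual server.py; the real handler would run the VQA model on the uploaded image and question.

```python
from flask import Flask, request, jsonify

# server.py keeps its configuration in constants at the top of the file;
# these names are placeholders.
HOST, PORT = "0.0.0.0", 5000

app = Flask(__name__)

@app.route("/api/answer", methods=["POST"])  # hypothetical route
def answer():
    data = request.get_json()
    # Stub: a real handler would run the model here.
    return jsonify({"question": data["question"], "answer": "unknown"})

if __name__ == "__main__":
    app.run(host=HOST, port=PORT)
```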
License and Citation
This code and the pretrained model are released under the BSD 2-Clause license. See LICENSE for more information.
@article{fukui16mcb,
  title={Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding},
  author={Fukui, Akira and Park, Dong Huk and Yang, Daylen and Rohrbach, Anna and Darrell, Trevor and Rohrbach, Marcus},
  journal={arXiv:1606.01847},
  year={2016},
}