
neural-vqa-tensorflow

Visual Question Answering in Tensorflow

Join the chat at https://gitter.im/neural-vqa-tensorflow/Lobby

This is a Tensorflow implementation of the VIS + LSTM visual question answering model from the paper Exploring Models and Data for Image Question Answering by Mengye Ren, Ryan Kiros & Richard Zemel. The model architecture varies slightly from the original: the image embedding is plugged into the last LSTM step (after the question) instead of the first. The LSTM model uses the same hyperparameters as those in the Torch implementation of neural-VQA.

Model architecture figure: http://i.imgur.com/Jvixx2W.jpg

Requirements

  • Tensorflow

  • h5py (the preprocessed data is stored as h5 files)

Datasets

  • Download the MSCOCO train+val images and VQA data using Data/download_data.sh. Extract all the downloaded zip files inside the Data folder.

  • Download the pretrained VGG-16 tensorflow model and save it in the Data folder.
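
A .tfmodel file like this is typically a serialized TensorFlow GraphDef. A minimal TF1-style loading sketch, for reference only (not necessarily how extract_fc7.py loads it):

import tensorflow as tf

# Read the serialized VGG-16 graph (a TF1 GraphDef protobuf)
with open("Data/vgg16.tfmodel", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import into the default graph; its tensors become addressable by name
tf.import_graph_def(graph_def, name="vgg")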

Usage

  • Extract the fc7 image features using:

python extract_fc7.py --split=train
python extract_fc7.py --split=val
  • Training

    • Basic usage: python train.py

    • Options (an example invocation follows this list):

      • rnn_size: Size of the LSTM internal state. Default is 512.

      • num_lstm_layers: Number of LSTM layers. Default is 2.

      • embedding_size: Size of word embeddings. Default is 512.

      • learning_rate: Learning rate. Default is 0.001.

      • batch_size: Batch size. Default is 200.

      • epochs: Number of full passes through the training data. Default is 50.

      • img_dropout: Dropout for the image embedding network; probability of dropping the input. Default is 0.5.

      • word_emb_dropout: Dropout for word embeddings. Default is 0.5.

      • data_dir: Directory containing the data h5 files (see the inspection snippet at the end of this section). Default is Data/.
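
    • Example invocation, spelling out the defaults listed above (using the same --flag=value syntax as the other scripts):

python train.py --rnn_size=512 --num_lstm_layers=2 --embedding_size=512 --learning_rate=0.001 --batch_size=200 --epochs=50 --img_dropout=0.5 --word_emb_dropout=0.5 --data_dir=Data/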

  • Prediction

    • python predict.py --image_path="sample_image.jpg" --question="What is the color of the animal shown?" --model_path="Data/Models/model2.ckpt"

    • Models are saved during training after each complete pass over the training data, in Data/Models. Supply the path of a trained model via the model_path option.

  • Evaluation

    • Run python evaluate.py with the same options as train.py, if you changed any of the defaults.
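
The h5 files referred to by data_dir can be quickly inspected with h5py. The file name below is hypothetical; substitute whatever extract_fc7.py actually writes:

import h5py

# Hypothetical file name -- replace with an actual h5 file from Data/
with h5py.File("Data/fc7_features_train.h5", "r") as f:
    for name in f.keys():
        print(name, f[name].shape, f[name].dtype)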

Implementation Details

  • fc7 ReLU layer features from the pretrained VGG-16 model are used as the image embeddings. I did not scale these features, and am not sure whether doing so would make a difference.

  • Questions are zero-padded to a fixed length so that batch training can be used. Questions are represented as word indices into a question-word vocabulary built during preprocessing.

  • Answers are mapped to a 1000-word vocabulary, which covers 87% of the answers across the training and validation datasets.

  • The LSTM+VIS model is defined in vis_lstm.py. The input tensors for training are the fc7 features, the questions (word indices, up to 22 words), and the answers (one-hot vectors of size 1000). The model depicted in the figure is implemented with 2 LSTM layers by default (num_lstm_layers is configurable); see the sketch below.
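
A minimal TF1-style sketch of those training inputs and of appending the image embedding as the last LSTM step. Variable names, the vocabulary size, and the projection details are assumptions for illustration, not the actual code in vis_lstm.py:

import tensorflow as tf

# Shapes follow the description above: 4096-d VGG-16 fc7 features,
# 22-word questions, 1000-way one-hot answers. Names are illustrative.
fc7 = tf.placeholder(tf.float32, [None, 4096], name="fc7")
sentence = tf.placeholder(tf.int32, [None, 22], name="sentence")
answer = tf.placeholder(tf.float32, [None, 1000], name="answer")

vocab_size = 10000  # hypothetical question-vocabulary size
word_emb = tf.Variable(tf.random_uniform([vocab_size, 512], -0.1, 0.1))
img_proj = tf.Variable(tf.random_uniform([4096, 512], -0.1, 0.1))

question_emb = tf.nn.embedding_lookup(word_emb, sentence)  # batch x 22 x 512
image_emb = tf.matmul(fc7, img_proj)                       # batch x 512

# VIS+LSTM variant used here: the image embedding goes in AFTER the
# question words, i.e. as the last of 23 LSTM input steps.
lstm_inputs = tf.concat([question_emb, tf.expand_dims(image_emb, 1)], axis=1)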

Results

The model achieved an accuracy of 50.8% on the validation dataset after 12 epochs of training over the entire training dataset.

Sample Predictions

The fun part! Try it for yourself. Make sure you have tensorflow installed. Download the data files and trained model from this link and save them in the Data/ directory. Also download the pretrained VGG-16 model and save it as Data/vgg16.tfmodel. You can then test any sample image using:

python predict.py --image_path="Data/sample.jpg" --question="Which animal is this?" --model_path="Data/model2.ckpt"


References

  • Exploring Models and Data for Image Question Answering, Mengye Ren, Ryan Kiros & Richard Zemel

  • neural-VQA, the Torch implementation of the VIS+LSTM model