
Inferring and Executing Programs for Visual Reasoning


This is the code for the paper

Inferring and Executing Programs for Visual Reasoning
Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Fei-Fei Li, Larry Zitnick, Ross Girshick
To appear at ICCV 2017

system.png

If you find this code useful in your research then please cite

@inproceedings{johnson2017inferring,
  title={Inferring and Executing Programs for Visual Reasoning},
  author={Johnson, Justin and Hariharan, Bharath and van der Maaten, Laurens and Hoffman, Judy
          and Fei-Fei, Li and Zitnick, C Lawrence and Girshick, Ross},
  booktitle={ICCV},
  year={2017}
}

Setup

All code was developed and tested on Ubuntu 16.04 with Python 3.5.

You can set up a virtual environment to run the code like this:

virtualenv -p python3 .env       # Create virtual environment
source .env/bin/activate         # Activate virtual environment
pip install -r requirements.txt  # Install dependencies
echo $PWD > .env/lib/python3.5/site-packages/iep.pth  # Add this package to virtual environment
# Work for a while ...
deactivate                       # Exit virtual environment
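The `echo $PWD > .env/lib/python3.5/site-packages/iep.pth` line works because Python's `site` module appends every directory listed in a `.pth` file to `sys.path`. A minimal standard-library sketch (temporary directories stand in for site-packages and the repo root, which are assumptions of this illustration):

```python
import os
import site
import sys
import tempfile

# Simulate what `echo $PWD > site-packages/iep.pth` does: a .pth file
# simply lists directories that should be appended to sys.path.
site_dir = tempfile.mkdtemp()     # stands in for site-packages
project_dir = tempfile.mkdtemp()  # stands in for $PWD (the repo root)

with open(os.path.join(site_dir, "iep.pth"), "w") as f:
    f.write(project_dir + "\n")

# site.addsitedir processes .pth files the same way the interpreter
# does for real site-packages directories at startup.
site.addsitedir(site_dir)

print(project_dir in sys.path)  # True
```

After this, `import iep` resolves against the repo checkout without installing the package.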

Pretrained Models

You can download and unzip the pretrained models by running `bash scripts/download_pretrained_models.sh`; the models will take about 1.1 GB on disk.

We provide two sets of pretrained models:

- The models in `models/CLEVR` were trained on the CLEVR dataset; these were used to make Table 1 in the paper.
- The models in `models/CLEVR-Humans` were first trained on CLEVR and then finetuned on the CLEVR-Humans dataset; these models were used to make Table 3 in the paper.

Running Models

You can easily run any of the pretrained models on new images and questions. As an example, we will run several models on the following example image from the CLEVR validation set:

CLEVR_val_000013.png

After downloading the pretrained models, you can use them to answer questions about this image with the following command:

python scripts/run_model.py \
  --program_generator models/CLEVR/program_generator_18k.pt \
  --execution_engine models/CLEVR/execution_engine_18k.pt \
  --image img/CLEVR_val_000013.png \
  --question "Does the small sphere have the same color as the cube left of the gray cube?"

This will print the predicted answer, as well as the program that the model used to produce the answer. For the example command we get the output:

Question: "Does the small sphere have the same color as the cube left of the gray cube?"
Predicted answer:  yes

Predicted program:
equal_color
query_color
unique
filter_shape[sphere]
filter_size[small]
scene
query_color
unique
filter_shape[cube]
relate[left]
unique
filter_shape[cube]
filter_color[gray]
scene
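The program is printed in prefix order: each function consumes the outputs of the functions listed after it. As a hypothetical illustration (not this repository's code), assuming `scene` takes no inputs, `equal_color` takes two, and every other function takes one, the listing can be rebuilt into a call tree:

```python
# Hypothetical sketch, not code from this repository: reconstruct the
# nested call structure of the printed prefix-order program. Assumed
# arities: `scene` produces the initial object set (0 inputs),
# `equal_color` compares two values (2 inputs), everything else takes 1.
ARITY = {"scene": 0, "equal_color": 2}

def parse(tokens):
    """Consume prefix-ordered tokens; return a (function, children) tree."""
    fn = tokens.pop(0)
    base = fn.split("[")[0]        # drop parameters like "[sphere]"
    n_inputs = ARITY.get(base, 1)  # default: one input
    return (fn, [parse(tokens) for _ in range(n_inputs)])

program = [
    "equal_color",
    "query_color", "unique", "filter_shape[sphere]", "filter_size[small]", "scene",
    "query_color", "unique", "filter_shape[cube]", "relate[left]",
    "unique", "filter_shape[cube]", "filter_color[gray]", "scene",
]
tree = parse(list(program))
print(tree[0], len(tree[1]))  # equal_color 2
```

Reading the tree bottom-up matches the question: one branch finds the small sphere's color, the other finds the color of the cube left of the gray cube, and `equal_color` compares the two.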

Training

The procedure for training your own models is described here.
