
nips16_PTN


Perspective Transformer Nets (PTN)

This is the code for the NIPS 2016 paper Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision by Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee.

Overview image: https://b191c0a7-a-62cb3a1a-s-sites.googlegroups.com/site/skywalkeryxc/perspective_transformer_nets/website_background.png

Please follow the instructions to run the code.
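To give a feel for what the perspective transformer layer computes, here is a toy NumPy sketch (not the repository's Torch code): a voxel occupancy grid is projected to a 2D silhouette by taking, for each ray, the maximum occupancy of the voxels it passes through. The actual layer builds a perspective sampling grid and uses trilinear sampling; this simplified version uses axis-aligned (orthographic) rays purely to illustrate the max-pooling-along-rays step.

```python
import numpy as np

# Toy occupancy grid: a solid cube in the middle of a 32^3 volume.
voxels = np.zeros((32, 32, 32), dtype=np.float32)
voxels[8:24, 8:24, 8:24] = 1.0

# Orthographic stand-in for the projection step: each output pixel
# takes the maximum occupancy along its (here, z-axis-aligned) ray.
silhouette = voxels.max(axis=2)

print(silhouette.shape)       # (32, 32)
print(silhouette[16, 16])     # 1.0 -- a ray through the cube
print(silhouette[0, 0])       # 0.0 -- a ray that misses it
```

In the real layer, the rays follow the camera's perspective geometry and the whole operation is differentiable, which is what allows training from 2D silhouettes without 3D supervision.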

Requirements

PTN requires or works with

  • Mac OS X or Linux

  • NVIDIA GPU

Installing Dependencies

The following command installs the Perspective Transformer Layer:

./install_ptnbhwd.sh

Dataset Downloading

  • Run the following command to download the pre-processed dataset (including rendered 2D views and 3D volumes):

./prepare_data.sh

Pre-trained Models Downloading (single-class experiment)

PTN-Proj: ptn_proj.t7

PTN-Comb: ptn_comb.t7

CNN-Vol: cnn_vol.t7

  • The following command downloads the pre-trained models:

./download_models.sh

Testing using Pre-trained Models (single-class experiment)

  • The following command evaluates the pre-trained models:

./eval_models.sh

Training (single-class experiment)

  • To pre-train the view-point independent image encoder on a single class, run the following command. Note that pre-training can take a few days on a single TITAN X GPU.

./demo_pretrain_singleclass.sh

  • To train PTN-Proj (unsupervised) on a single class, starting from the pre-trained encoder, run:

./demo_train_ptn_proj_singleclass.sh

  • To train PTN-Comb (with 3D supervision) on a single class, starting from the pre-trained encoder, run:

./demo_train_ptn_comb_singleclass.sh

  • To train CNN-Vol (with 3D supervision) on a single class, starting from the pre-trained encoder, run:

./demo_train_cnn_vol_singleclass.sh

Using your own camera

  • In many cases, you may want to plug in your own camera matrix (e.g., your own intrinsics or extrinsics). Please feel free to modify this function.

  • Before starting your own implementation, we recommend going through the basic camera geometry in the computer vision textbook by Richard Szeliski (see Eq. 2.59 on page 53).

  • Note that our voxel ray-tracing implementation uses the inverse camera matrix.
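As a hedged illustration of the inverse-matrix point above (not the repository's actual code; all names here are hypothetical), the NumPy sketch below builds a world-to-camera extrinsic matrix for a camera orbiting the origin, then inverts it. The inverse (camera-to-world) matrix is the one a voxel ray-tracing step would apply to map camera-space sample points back into the voxel grid's world frame.

```python
import numpy as np

def extrinsic_matrix(azimuth_deg, elevation_deg, distance):
    """Hypothetical helper: 4x4 world-to-camera matrix for a camera
    at the given azimuth/elevation/distance, looking at the origin."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    # Camera position on a sphere around the object (z is "up").
    eye = distance * np.array([np.cos(el) * np.cos(az),
                               np.cos(el) * np.sin(az),
                               np.sin(el)])
    # Look-at frame: forward points from the camera toward the origin.
    forward = -eye / np.linalg.norm(eye)
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    R = np.stack([right, up, -forward])  # rows = camera axes in world coords
    t = -R @ eye                         # translation mapping eye to the origin
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

M = extrinsic_matrix(30.0, 20.0, 2.0)
M_inv = np.linalg.inv(M)  # camera-to-world: used to place ray samples in the voxel frame
```

Because the rotation block is orthonormal, the inverse could also be formed analytically as [R^T | -R^T t], which is cheaper and numerically cleaner than a general matrix inverse.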

Third-party Implementation

Besides our Torch implementation, we also recommend the following third-party re-implementation:

  • TensorFlow implementation: this re-implementation was developed during Xinchen's internship at Google. If you find a bug, please file an issue and mention @xcyan.

Citation

If you find this code useful, please cite our work as follows:

@incollection{NIPS2016_6206,
title = {Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision},
author = {Yan, Xinchen and Yang, Jimei and Yumer, Ersin and Guo, Yijie and Lee, Honglak},
booktitle = {Advances in Neural Information Processing Systems 29},
editor = {D. D. Lee and M. Sugiyama and U. V. Luxburg and I. Guyon and R. Garnett},
pages = {1696--1704},
year = {2016},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/6206-perspective-transformer-nets-learning-single-view-3d-object-reconstruction-without-3d-supervision.pdf}
}

