
Faster-RCNN


Faster-RCNN_TF

This is an experimental TensorFlow implementation of Faster R-CNN, a convolutional network for object detection with a region proposal network. For details, please refer to the paper Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks by Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.

Requirements: software

1. Requirements for TensorFlow (see: TensorFlow)

2. Python packages you might not have: cython, python-opencv, easydict

Requirements: hardware

1. For training the end-to-end version of Faster R-CNN with VGG16, 3 GB of GPU memory is sufficient (using cuDNN)

Installation (sufficient for the demo)

1. Clone the Faster R-CNN repository:

# Make sure to clone with --recursive
git clone --recursive https://github.com/smallcorgi/Faster-RCNN_TF.git

2. Build the Cython modules:

cd $FRCN_ROOT/lib
make

Demo

After successfully completing basic installation, you'll be ready to run the demo.

Download the model trained on PASCAL VOC 2007 [Google Drive] [Dropbox]

To run the demo:
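In the upstream Faster-RCNN_TF repository, the demo script is typically invoked as follows (model_path is a placeholder for the model file downloaded above):

cd $FRCN_ROOT
python ./tools/demo.py --model model_path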

Wasserstein GAN

Code accompanying the paper "Wasserstein GAN"

A few notes

The first time you run on the LSUN dataset, it can take a long time (up to an hour) to create the dataloader. After the first run a small cache file is created and the process should take a matter of seconds. The cache is a list of indices into the LSUN lmdb database.
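As an illustration of the caching idea only (the function name, cache file name, and layout below are assumptions, not the repository's actual code), the index list can be built once and pickled for reuse:

import lmdb
import os
import pickle

def load_lsun_keys(db_path, cache_file='_lsun_keys.pkl'):
    # Fast path: reuse the cached index list (takes seconds).
    if os.path.exists(cache_file):
        with open(cache_file, 'rb') as f:
            return pickle.load(f)
    # Slow path: scan every key in the lmdb database once (can take a long time).
    env = lmdb.open(db_path, readonly=True, lock=False)
    with env.begin() as txn:
        keys = [key for key, _ in txn.cursor()]
    with open(cache_file, 'wb') as f:
        pickle.dump(keys, f)
    return keys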

The only addition to the code (which we forgot to mention in the paper, and will add) is lines 163-166 of main.py. These lines act only on the first 25 generator iterations or very sporadically (once every 500 generator iterations). In such cases, they set the number of critic iterations to 100 instead of the default 5. This helps to start with the critic at optimum even in the first iterations. There shouldn't be a major difference in performance, but it can help, especially when visualizing learning curves (since otherwise you'd see the loss going up until the critic is properly trained). This is also why the first 25 iterations take significantly longer than the rest of training.
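A minimal sketch of that schedule, assuming a gen_iterations counter, a Diters variable, and a train_critic_step helper (all three names are assumptions and may differ from main.py):

# inside the training loop, before the critic updates
if gen_iterations < 25 or gen_iterations % 500 == 0:
    Diters = 100  # push the critic to (near-)optimality early and periodically
else:
    Diters = 5    # default number of critic iterations per generator step
for _ in range(Diters):
    train_critic_step()  # hypothetical helper performing one critic update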

If your learning curve suddenly takes a big drop, take a look at this. It's a problem that arises when the critic fails to stay close to optimum, and hence its error stops being a good Wasserstein estimate. Known causes are high learning rates and momentum; anything that helps the critic get back on track is likely to help with the issue.

Prerequisites

A computer with Linux or OS X

PyTorch

For training, an NVIDIA GPU is strongly recommended for speed. CPU is supported but training is very slow.

Two main empirical claims:

Generator sample quality correlates with discriminator loss

Improved model stability

Reproducing LSUN experiments

With DCGAN:

python main.py --dataset lsun --dataroot [lsun-train-folder] --cuda

With MLP:

python main.py --mlp_G --ngf 512

Generated samples will be in the samples folder.

If you plot the value -Loss_D, you can reproduce the curves from the paper. As mentioned in the paper, those curves have a median filter applied to them:

# assumes: import numpy as np; import scipy.signal
med_filtered_loss = scipy.signal.medfilt(-np.asarray(Loss_D, dtype='float64'), 101)
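For instance, the filtered curve can then be plotted like this (a hedged sketch; matplotlib is not a dependency of the repository, and Loss_D is assumed to be the array of critic losses logged during training):

import matplotlib.pyplot as plt

# Plot the median-filtered negative critic loss against generator iterations.
plt.plot(med_filtered_loss)
plt.xlabel('generator iterations')
plt.ylabel('-Loss_D (Wasserstein estimate)')
plt.show()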

A more polished README is in the works.

