lddp-tf-faster-rcnn

Learning Detection with Diverse Proposals

A TensorFlow implementation of Learning Detection with Diverse Proposals, written by Nuri Kim. This repository is based on the Faster R-CNN implementation available here.

Performance

All results below are evaluated on the VOC 2007 test set, with VGG16 as the backbone network. The crowd set consists of images containing at least one object whose overlap with another object of the same category exceeds the threshold (0.3); a small sketch of this criterion appears after the tables below.

Trained with VOC2007 trainval set:

Method              | mAP   | mAP on Crowd
Faster R-CNN        | 71.4% | 57.7%
Faster R-CNN + LDDP | 70.9% | 61.8%

Trained with VOC0712 trainval set:

Method              | mAP   | mAP on Crowd
Faster R-CNN        | 75.8% | 62.0%
Faster R-CNN + LDDP | 76.6% | 64.5%
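
For concreteness, here is a minimal sketch of how the crowd criterion could be checked for one image: the image counts as a crowd image if any two ground-truth boxes of the same category overlap by more than 0.3. The IoU-based overlap measure, the [x1, y1, x2, y2] box format, and the helper names are illustrative assumptions, not taken from this repository's evaluation code.

    import numpy as np

    def box_iou(a, b):
        # Intersection-over-union of two boxes given as [x1, y1, x2, y2].
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    def is_crowd_image(boxes, labels, thresh=0.3):
        # True if any two same-category ground-truth boxes overlap above `thresh`.
        boxes = np.asarray(boxes, dtype=np.float64)
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if labels[i] == labels[j] and box_iou(boxes[i], boxes[j]) > thresh:
                    return True
        return False

    # Two heavily overlapping 'person' boxes -> counted as a crowd image (IoU = 0.5).
    print(is_crowd_image([[10, 10, 100, 200], [40, 10, 130, 200]], ["person", "person"]))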

Prerequisites

  • A basic TensorFlow installation. I used TensorFlow 1.7.

  • Python packages you might not have: cython, opencv-python, easydict (similar to py-faster-rcnn).

Installation

  1. Clone the repository

git clone https://github.com/bareblackfoot/lddp-tf-faster-rcnn.git
  2. Build the Cython modules

cd lddp-tf-faster-rcnn/lib
make clean
make
cd ..
  3. Install the Python COCO API. The code requires the API to access the COCO dataset.

cd data
git clone https://github.com/pdollar/coco.git
cd coco/PythonAPI
make
cd ../../..

Setup data

Please follow the instructions of py-faster-rcnn here to set up the VOC and COCO datasets (part of the COCO setup is already done above). The steps involve downloading data and optionally creating soft links in the data folder; a minimal illustration of such a soft link is sketched below. Since Faster R-CNN does not rely on pre-computed proposals, it is safe to ignore the steps that set up proposals.
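
The exact layout is defined by the py-faster-rcnn instructions; as a minimal sketch (not taken from this repository), the snippet below soft-links a downloaded VOCdevkit into the data folder under the conventional name VOCdevkit2007. The paths are hypothetical placeholders.

    import os

    # Hypothetical paths for illustration; follow the py-faster-rcnn instructions
    # for the authoritative layout.
    voc_devkit = "/path/to/VOCdevkit"                  # contains VOC2007/ (and VOC2012/)
    link_path = os.path.join("data", "VOCdevkit2007")  # run from the repository root

    if not os.path.islink(link_path) and not os.path.exists(link_path):
        # Soft-link the dataset into data/ instead of copying it.
        os.symlink(voc_devkit, link_path)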

Test with pre-trained models

  1. Download pre-trained model

  • Google Drive here. If you want to test the model trained on VOC 2007, the trained model is here.

  2. Create a folder and a soft link to use the pre-trained model

NET=res101
TRAIN_IMDB=voc_2007_trainval+voc_2012_trainval
mkdir -p output/${NET}/${TRAIN_IMDB}
cd output/${NET}/${TRAIN_IMDB}
ln -s ../../../data/voc_2007_trainval+voc_2012_trainval ./default
cd ../../..
  3. Test with pre-trained VGG16 models

GPU_ID=0
./experiments/scripts/test_lddp.sh $GPU_ID pascal_voc_0712 vgg16

Train your own model

  1. Download pre-trained models and weights. The current code supports VGG16 and ResNet V1 models. Pre-trained models are provided by slim; you can get them here and place them in the data/imagenet_weights folder. For example, for the VGG16 model, you can set up like:

    mkdir -p data/imagenet_weights
    cd data/imagenet_weights
    wget -v http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz
    tar -xzvf vgg_16_2016_08_28.tar.gz
    mv vgg_16.ckpt vgg16.ckpt
    cd ../..

    For ResNet101, you can set up like:

    mkdir -p data/imagenet_weights
    cd data/imagenet_weights
    wget -v http://download.tensorflow.org/models/resnet_v1_101_2016_08_28.tar.gz
    tar -xzvf resnet_v1_101_2016_08_28.tar.gz
    mv resnet_v1_101.ckpt res101.ckpt
    cd ../..
  2. Train (and test, evaluate)

./experiments/scripts/train_lddp.sh [GPU_ID] [DATASET] [NET]
# GPU_ID is the GPU you want to train on
# NET in {vgg16, res50, res101, res152} is the network arch to use
# DATASET {pascal_voc, pascal_voc_0712, coco} is defined in train_lddp.sh
# Examples:
./experiments/scripts/train_lddp.sh 0 pascal_voc_0712 vgg16
./experiments/scripts/train_lddp.sh 1 coco res101
  3. Visualization with Tensorboard

tensorboard --logdir=tensorboard/vgg16/voc_2007_trainval/ --port=7001 &
tensorboard --logdir=tensorboard/vgg16/coco_2014_train+coco_2014_valminusminival/ --port=7002 &
  4. Test and evaluate

./experiments/scripts/test_lddp.sh [GPU_ID] [DATASET] [NET]
# GPU_ID is the GPU you want to test on
# NET in {vgg16, res50, res101, res152} is the network arch to use
# DATASET {pascal_voc, pascal_voc_0712, coco} is defined in test_lddp.sh
# Examples:
./experiments/scripts/test_lddp.sh 0 pascal_voc vgg16
./experiments/scripts/test_lddp.sh 1 coco res101
  5. You can use tools/reval.sh for re-evaluation

By default, trained networks are saved under:

output/[NET]/[DATASET]/default/

Test outputs are saved under:

output/[NET]/[DATASET]/default/[SNAPSHOT]/

Tensorboard information for train and validation is saved under:

tensorboard/[NET]/[DATASET]/default/
tensorboard/[NET]/[DATASET]/default_val/

