RetinaNet-and-SSD-in-PyTorch-Detectron

SSD: Single Shot MultiBox Object Detector, in PyTorch

A PyTorch implementation of the Single Shot MultiBox Detector from the 2016 paper by Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. The official and original Caffe code can be found here.


Installation

  • Install PyTorch by selecting your environment on the website and running the appropriate command.

  • Clone this repository.

    • Note: We currently only support Python 3+.

  • Then download the dataset by following the instructions below.

  • We now support Visdom for real-time loss visualization during training!

    • To use Visdom in the browser:

      # First install Python server and client
      pip install visdom
      # Start the server (probably in a screen or tmux)
      python -m visdom.server

    • Then (during training) navigate to http://localhost:8097/ (see the Train section below for training details). A minimal loss-logging sketch follows this list.

  • Note: For training, we currently support VOC and COCO, and aim to add ImageNet support soon.
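
A minimal sketch of pushing training-loss values to a running Visdom server (the window title and the dummy loss values below are illustrative assumptions, not the repo's actual logging code):

import numpy as np
import visdom

vis = visdom.Visdom()  # assumes `python -m visdom.server` is already running
loss_window = None

for iteration, loss_value in enumerate([2.5, 2.1, 1.8]):  # dummy loss values
    if loss_window is None:
        # create the plot on the first point
        loss_window = vis.line(X=np.array([iteration]), Y=np.array([loss_value]),
                               opts=dict(title='Training loss (example)'))
    else:
        # append subsequent points to the same window
        vis.line(X=np.array([iteration]), Y=np.array([loss_value]),
                 win=loss_window, update='append')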

Datasets

To make things easy, we provide bash scripts to handle the dataset downloads and setup for you. We also provide simple dataset loaders that inherit torch.utils.data.Dataset, making them fully compatible with the torchvision.datasets API.
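
Because the loaders subclass torch.utils.data.Dataset, they can be wrapped in a standard DataLoader. A minimal sketch of a loader in the same style (the class and field names here are illustrative, not the repo's actual VOC/COCO loaders):

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDetectionDataset(Dataset):
    """Illustrative detection dataset: each item is (image, boxes-with-labels)."""
    def __init__(self, num_samples=8):
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        image = torch.rand(3, 300, 300)                     # fake 300x300 RGB image
        target = torch.tensor([[0.1, 0.1, 0.5, 0.5, 1.0]])  # [xmin, ymin, xmax, ymax, label]
        return image, target

# detection targets vary in length per image, so keep them as a tuple per batch
loader = DataLoader(ToyDetectionDataset(), batch_size=4,
                    collate_fn=lambda batch: tuple(zip(*batch)))
images, targets = next(iter(loader))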

COCO

Microsoft COCO: Common Objects in Context

Download COCO 2014
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/COCO2014.sh

VOC Dataset

PASCAL VOC: Visual Object Classes

Download VOC2007 trainval & test
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2007.sh # <directory>
Download VOC2012 trainval
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2012.sh # <directory>

Training SSD

First download the fc-reduced VGG-16 base network weights into a weights directory:

mkdir weights
cd weights
wget https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth
  • To train SSD using the train script, simply specify the parameters listed in train.py as flags or change them manually.

python train.py
  • Note:

    • For training, an NVIDIA GPU is strongly recommended for speed.

    • For instructions on Visdom usage/installation, see the Installation section.

    • You can pick up training from a checkpoint by specifying its path as one of the training parameters (again, see train.py for options); a minimal resume sketch follows these notes.
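
A minimal sketch of resuming from a saved checkpoint in plain PyTorch (the network and file name below are placeholders; in the repo the checkpoint path is passed as a training parameter to train.py):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())  # placeholder network

# pretend an earlier run saved a checkpoint
torch.save(model.state_dict(), 'example_checkpoint.pth')

# resume: rebuild the same architecture and load the saved weights
resumed = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())
resumed.load_state_dict(torch.load('example_checkpoint.pth', map_location='cpu'))
optimizer = torch.optim.SGD(resumed.parameters(), lr=1e-3)  # then continue the training loop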

Evaluation

To evaluate a trained network:

python eval.py

You can specify the parameters listed in eval.py as flags or change them manually.
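
eval.py reports VOC-style mAP, which matches predicted boxes to ground truth by intersection-over-union (IoU), typically at a 0.5 threshold. A minimal IoU sketch, assuming boxes in [xmin, ymin, xmax, ymax] format (not the repo's actual evaluation code):

def iou(box_a, box_b):
    """Intersection-over-union of two boxes in [xmin, ymin, xmax, ymax] format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1 / 7 ≈ 0.14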

Performance

VOC2007 Test

mAP
| Original | Converted weiliu89 weights | From scratch w/o data aug | From scratch w/ data aug |
|---|---|---|---|
| 77.2 % | 77.26 % | 58.12 % | 77.43 % |
FPS

GTX 1060: ~45.45 FPS

Demos

Use a pre-trained SSD network for detection

Download a pre-trained network
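
Once a pre-trained checkpoint is downloaded, loading it for inference might look like the sketch below. The build_ssd factory, the load_weights call, and the checkpoint filename are assumptions based on this repo's demo notebook; verify the exact names against ssd.py before relying on them:

import torch
from ssd import build_ssd  # assumes the repo's ssd.py is importable

# 'test' phase adds the detection/NMS layer; 300 = input size, 21 = VOC classes + background
net = build_ssd('test', 300, 21)
net.load_weights('weights/ssd300_mAP_77.43_v2.pth')  # filename assumed; use your downloaded checkpoint
net.eval()

# dummy preprocessed input: 1 x 3 x 300 x 300 (the real demo uses a BGR, mean-subtracted image)
with torch.no_grad():
    detections = net(torch.rand(1, 3, 300, 300))
print(detections.shape)  # typically [batch, num_classes, top_k, 5]: (score, xmin, ymin, xmax, ymax)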

SSD results on multiple datasets

Try the demo notebook

  • Make sure you have jupyter notebook installed.

  • Two alternatives for installing jupyter notebook:

    1. If you installed PyTorch with conda (recommended), then you should already have it. Just navigate to the cloned ssd.pytorch repo and run: jupyter notebook

    2. If using pip:

# make sure pip is upgraded
pip3 install --upgrade pip
# install jupyter notebook
pip install jupyter
# Run this inside ssd.pytorch
jupyter notebook

Try the webcam demo

  • Works on CPU (you may have to tweak cv2.waitKey for optimal FPS; see the sketch after this list) or on an NVIDIA GPU

  • This demo currently requires OpenCV 2+ with Python bindings and an onboard webcam

    • You can change the default webcam in demo/live.py

  • Install the imutils package to leverage multi-threading on CPU:

    • pip install imutils

  • Running python -m demo.live opens the webcam and begins detecting!
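
For reference, a bare-bones OpenCV capture loop of the kind the live demo builds on (a generic sketch, not demo/live.py itself; the waitKey delay is the knob mentioned above):

import cv2

cap = cv2.VideoCapture(0)  # 0 = default webcam; change the index for another camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ...run the SSD forward pass on `frame` and draw the detected boxes here...
    cv2.imshow('live', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # increase the delay if CPU usage is too high
        break
cap.release()
cv2.destroyAllWindows()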

TODO

We have accumulated the following to-do list, which we hope to complete in the near future

  • Still to come:

    •  Support for the MS COCO dataset

    •  Support for SSD512 training and testing

    •  Support for training on custom datasets

Authors

Note: Unfortunately, this is just a hobby of ours and not a full-time job, so we'll do our best to keep things up to date, but no guarantees. That being said, thanks to everyone for your continued help and feedback as it is really appreciated. We will try to address everything as soon as possible.

References

  • Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg. "SSD: Single Shot MultiBox Detector." ECCV 2016.