
# Improving Semantic Segmentation via Video Propagation and Label Relaxation

Project | Paper | YouTube | Cityscapes Score | Kitti Score

PyTorch implementation of our CVPR 2019 paper (oral) on achieving state-of-the-art semantic segmentation results using a DeepLabV3+-like architecture with a WideResNet38 trunk. We present a video prediction-based methodology to scale up training sets by synthesizing new training samples, and we propose a novel label relaxation technique to make training objectives robust to label noise.

**Improving Semantic Segmentation via Video Propagation and Label Relaxation**
Yi Zhu¹\*, Karan Sapra²\*, Fitsum A. Reda², Kevin J. Shih², Shawn Newsam¹, Andrew Tao², Bryan Catanzaro²
¹UC Merced, ²NVIDIA Corporation
In CVPR 2019 (\* equal contributions).

**SDCNet: Video Prediction using Spatially Displaced Convolution**
Fitsum A. Reda, Guilin Liu, Kevin J. Shih, Robert Kirby, Jon Barker, David Tarjan, Andrew Tao, Bryan Catanzaro
NVIDIA Corporation
In ECCV 2018.
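
To make the label relaxation idea concrete, here is a minimal, hedged sketch of a boundary-relaxed loss: at pixels near class boundaries, instead of forcing a single hard label, the loss maximizes the total probability assigned to *any* class present in the local neighborhood. The function name and the multi-hot target encoding below are illustrative assumptions, not the repo's actual API.

```
import torch
import torch.nn.functional as F

def relaxed_boundary_loss(logits, border_onehot):
    """Illustrative border label relaxation (not the repo's exact code).

    logits:        (N, C, H, W) raw network outputs.
    border_onehot: (N, C, H, W) multi-hot targets; at boundary pixels every
                   class present in the local window is marked 1, so the
                   target is deliberately ambiguous there.
    """
    probs = F.softmax(logits, dim=1)
    # Total probability mass the model puts on any admissible class.
    admissible = (probs * border_onehot).sum(dim=1).clamp(min=1e-8)
    # Maximize log P(union of admissible classes) at every pixel.
    return -torch.log(admissible).mean()
```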


## Installation

```
# Get Semantic Segmentation source code
git clone --recursive https://github.com/NVIDIA/semantic-segmentation.git
cd semantic-segmentation

# Build Docker image
docker build -t nvidia-segmentation -f Dockerfile .
```

If you prefer not to use Docker, you can manually install the following requirements:

- An NVIDIA GPU and CUDA 9.0 or higher. Some operations only have GPU implementations.
- PyTorch (>= 0.5.1)
- Python 3
- numpy
- sklearn
- h5py
- scikit-image
- pillow
- piexif
- cffi
- tqdm
- dominate
- tensorboardX
- opencv-python
- nose
- ninja

We are working on providing a detailed report; please bear with us.
To propose a model or a change for inclusion, please submit a pull request.

Multi-GPU training and mixed-precision training are supported, and the code provides examples for training and inference. For more help, type:

```
python3 train.py --help
```
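
As a rough illustration of what a mixed-precision training step looks like with NVIDIA Apex (which this repo builds on; see Acknowledgments), here is a hedged, self-contained sketch. The toy model, tensors, and `opt_level` are placeholders; `train.py` wires this up for the real networks.

```
import torch
import torch.nn.functional as F
from apex import amp  # assumes NVIDIA Apex is installed

# Toy stand-ins for the real segmentation network and optimizer.
model = torch.nn.Conv2d(3, 19, kernel_size=1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

images = torch.randn(2, 3, 64, 64).cuda()
labels = torch.randint(0, 19, (2, 64, 64)).cuda()

optimizer.zero_grad()
loss = F.cross_entropy(model(images), labels)
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()  # scaled backward pass keeps fp16 gradients stable
optimizer.step()
```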

## Network architectures

Our repo now supports the DeepLabV3+ architecture with different backbones, including WideResNet38, SEResNeXt(50, 101), and ResNet(50, 101).

## Pre-trained Models

We've included pre-trained models. Download the checkpoints to a folder named `pretrained_models`.
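
A quick, hedged way to sanity-check a downloaded checkpoint before training (the file name and the `state_dict` key below are assumptions; the actual layout depends on how the checkpoints were saved):

```
import torch

# Path and key names are examples only.
ckpt = torch.load("pretrained_models/cityscapes_best.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(f"{len(state_dict)} entries; first key: {next(iter(state_dict))}")
```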

## Data Loaders

Dataloaders for Cityscapes, Mapillary, CamVid, and KITTI are available in `datasets`.

## Running the code

**Dataloader:** To run the code, you will have to change the data path location in `config.py` to point at your data.

**Model Arch:** You can change the architecture name using the `--arch` flag available in `train.py`.
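
For example, the edits in `config.py` would look like the following. The directory values are placeholders; the `__C.DATASET.*` attribute names are the ones quoted in the sections below.

```
# In config.py -- point these at your local dataset copies.
__C.DATASET.CITYSCAPES_DIR = '/data/cityscapes'
__C.DATASET.MAPILLARY_DIR = '/data/mapillary'
```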

## Pre-Training on Mapillary

First, you can pre-train a DeepLabV3+ model with an SEResNeXt(50)-Stride8 trunk on the Mapillary dataset. Set `__C.DATASET.MAPILLARY_DIR` in `config.py` to where you store the Mapillary data. We use the research edition of the dataset, which you can request from here.

```
./scripts/train_mapillary.sh
```

## Fine-tuning on Cityscapes

Once you have the Mapillary pre-trained model (training mIoU should be 50+), you can start fine-tuning the model on the Cityscapes dataset. Set `__C.DATASET.CITYSCAPES_DIR` in `config.py` to where you store the Cityscapes data. Your training mIoU at the end should be 80+.

```
./scripts/train_cityscapes.sh
```

## Inference

Our inference code supports two evaluation modes: pooling-based and sliding-based. Pooling-based eval is faster than sliding-based eval but gives slightly lower numbers. We use `sliding` as the default.

```
./scripts/eval_cityscapes.sh <weight_file_location> <result_save_location>
```
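
For intuition, here is a hedged sketch of what sliding-based evaluation does: run the network on overlapping crops and average the logits where crops overlap. The crop size, stride, and class count are illustrative, and the model is assumed to return full-resolution logits for each tile; the repo's eval scripts choose their own values.

```
import torch

def tile_starts(size, crop, stride):
    # Tile offsets along one axis, always covering the far edge.
    starts = list(range(0, max(size - crop, 0) + 1, stride))
    if starts[-1] + crop < size:
        starts.append(size - crop)
    return starts

@torch.no_grad()
def sliding_eval(model, image, crop=512, stride=256, num_classes=19):
    """image: (1, 3, H, W); returns (1, H, W) predicted class ids."""
    _, _, H, W = image.shape
    logits = image.new_zeros(1, num_classes, H, W)
    counts = image.new_zeros(1, 1, H, W)
    for top in tile_starts(H, crop, stride):
        for left in tile_starts(W, crop, stride):
            tile = image[:, :, top:top + crop, left:left + crop]
            logits[:, :, top:top + crop, left:left + crop] += model(tile)
            counts[:, :, top:top + crop, left:left + crop] += 1
    # Average overlapping predictions, then take the per-pixel argmax.
    return (logits / counts).argmax(dim=1)
```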

For submitting to the Cityscapes benchmark, we simply switch to a multi-scale setting and use WideResNet38 as the trunk.

```
./scripts/submit_cityscapes.sh <weight_file_location> <result_save_location>
```
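
And a hedged sketch of the multi-scale part: average per-pixel class probabilities over several rescaled copies of the input. The scale set here is illustrative, not the one used by the submit script.

```
import torch
import torch.nn.functional as F

@torch.no_grad()
def multiscale_probs(model, image, scales=(0.5, 1.0, 2.0)):
    """Average softmax outputs over rescaled inputs; image: (1, 3, H, W)."""
    _, _, H, W = image.shape
    avg = 0.0
    for s in scales:
        resized = F.interpolate(image, scale_factor=s, mode="bilinear",
                                align_corners=False)
        out = model(resized)  # assumed to return (1, C, h, w) logits
        out = F.interpolate(out, size=(H, W), mode="bilinear",
                            align_corners=False)
        avg = avg + out.softmax(dim=1)
    return avg / len(scales)
```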

In the `result_save_location` you set, you will find several folders: `rgb`, `pred`, `compose`, and `diff`. `rgb` contains the color-encoded predicted segmentation masks. `pred` contains what you need to submit to the evaluation server; simply zip it and upload it. `compose` contains the original video frames overlaid with the color-encoded predicted segmentation masks. `diff` contains the differences between our predictions and the ground truth. For the test submission, the `diff` folder is empty because there is no ground truth.
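
As an aside, a `compose`-style overlay is easy to reproduce yourself; here is a hedged snippet using Pillow (file names are examples only):

```
from PIL import Image

# Blend the original frame with its color-encoded mask at 50% opacity.
frame = Image.open("frame.png").convert("RGB")
mask = Image.open("rgb/frame.png").convert("RGB").resize(frame.size)
Image.blend(frame, mask, alpha=0.5).save("compose/frame.png")
```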

Right now, our inference code only supports the Cityscapes dataset.

## Dataset augmentation

At this point, you can already achieve top performance on the Cityscapes benchmark (83+ mIoU). To boost segmentation performance further, we can use the augmented dataset to improve the model's generalization capability.

### Label Propagation using Video Prediction

First, you need to download the Cityscapes sequence dataset. Note that the sequence dataset is very large (a 325GB .zip file). Then we can use the video prediction model to propagate the ground-truth segmentation masks to adjacent video frames, giving us more annotated image-label pairs during training.

```
cd ./sdcnet
bash flownet2_pytorch/install.sh
./_aug.sh
```

By default, we predict five past frames and five future frames, which effectively enlarges the dataset tenfold. If you prefer to propagate fewer or more time steps, change the `--propagate` flag accordingly. Enjoy the augmented dataset. A conceptual sketch of propagation follows below.
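
For intuition about what propagation does, here is a hedged sketch of the simplest variant: warping a ground-truth label map into a neighboring frame with a dense displacement field, using nearest-neighbor sampling so class ids are never blended. SDCNet itself predicts frames with learned spatially displaced convolutions, so treat this only as a conceptual stand-in.

```
import torch
import torch.nn.functional as F

def warp_labels(labels, flow):
    """Warp integer labels (N, 1, H, W) by a pixel-space flow (N, 2, H, W)."""
    n, _, h, w = labels.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32),
                            indexing="ij")
    base = torch.stack((xs, ys), dim=0).unsqueeze(0)  # (1, 2, H, W), x first
    coords = base + flow
    # Normalize pixel coordinates to [-1, 1] as required by grid_sample.
    coords_x = 2 * coords[:, 0] / (w - 1) - 1
    coords_y = 2 * coords[:, 1] / (h - 1) - 1
    grid = torch.stack((coords_x, coords_y), dim=-1)  # (N, H, W, 2)
    warped = F.grid_sample(labels.float(), grid, mode="nearest",
                           padding_mode="border", align_corners=True)
    return warped.long()
```

Propagating a mask several frames forward then amounts to composing the per-step displacement fields and warping the annotation along with the frames.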


## Results on Cityscapes

![alt text](images/vis.png)

## Training IOU

Training results for WideResNet38 and SEResNeXt50, trained in fp16 on a DGX-1 (8× V100 GPUs):

| Model Name | Mean IOU | Training Time |
| --- | --- | --- |
| DeepWV3Plus (no sdc-aug) | 81.4 | ~14 hrs |
| DeepSRNX50V3PlusD_m1 (no sdc-aug) | 80.0 | ~9 hrs |


## Reference 

If you find this implementation useful in your work, please acknowledge it appropriately and cite the paper or code accordingly:

```
@inproceedings{semantic_cvpr19,
  author    = {Yi Zhu and Karan Sapra and Fitsum A. Reda and Kevin J. Shih and Shawn Newsam and Andrew Tao and Bryan Catanzaro},
  title     = {Improving Semantic Segmentation via Video Propagation and Label Relaxation},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019},
  url       = {https://nv-adlr.github.io/publication/2018-Segmentation}
}

@inproceedings{reda2018sdc,
  title     = {SDC-Net: Video prediction using spatially-displaced convolution},
  author    = {Reda, Fitsum A and Liu, Guilin and Shih, Kevin J and Kirby, Robert and Barker, Jon and Tarjan, David and Tao, Andrew and Catanzaro, Bryan},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  pages     = {718--733},
  year      = {2018}
}
```

\* indicates equal contribution.

We encourage people to contribute to our code base, provide suggestions, point out issues, or propose solutions via merge requests. We hope this repo is useful.

## Acknowledgments

Parts of the code were heavily derived from pytorch-semantic-segmentation, inplace-abn, PyTorch, ClementPinard/FlowNetPytorch, and Cadene.

Our initial models used SyncBN from Synchronized Batch Norm, but they have since been ported to the Apex SyncBN developed by Jie Jiang.

We would also like to thank Ming-Yu Liu and Peter Kontschieder.

## Coding Style

- 4 spaces for indentation rather than tabs
- 100 character line length
- PEP8 formatting

