2020-02-04

SRDenseNet (Caffe)

This is the implementation of the paper: T. Tong, G. Li, X. Liu, et al., "Image super-resolution using dense skip connections," ICCV 2017, pp. 4809-4817. PDF

Environment

  • OS: CentOS 7 Linux kernel 3.10.0-514.el7.x86_64

  • CPU: Intel Xeon(R) CPU E5-2667 v4 @ 3.20GHz x 32

  • Memory: 251.4 GB

  • GPU: NVIDIA Tesla P4, 8 GB

Software

  • CUDA 8.0 (with cuDNN installed)

  • Caffe (matcaffe interface required)

  • Python 2.7.5

  • Matlab 2017b

Dataset

These datasets are the same as those used in other papers. Readers can use them directly or download them from here:

BSDS100, BSDS200, General-100, Set5, Set14, T91, Train_291, Urban100, and DIV2K.

Train

  1. Copy the 'train' directory to 'Caffe_ROOT/examples/' and rename it to 'SRDenseNet'.

  2. Prepare the datasets in the 'data' directory.

  3. (optional) Run 'data_aug.m' in Matlab for data augmentation; e.g., data_aug('data/BSDS200'), which generates a new directory 'BSDS-200-aug'.

  4. Run 'generate_train.m' and 'generate_test.m' in Matlab to generate 'train.h5' and 'test.h5' (choose one or more datasets in advance).

  5. (optional) Modify the parameters in 'create_SRDenseNet.py'.

  6. Run 'python create_SRDenseNet.py' from the command line. It regenerates 'train_net.prototxt' and 'test_net.prototxt'.

  7. (optional) Modify the parameters in 'solver.prototxt'.

  8. Run './examples/SRDenseNet/train.sh' from the Caffe_ROOT path.

  9. Wait for the training procedure to complete.
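The 'train.sh' script in step 8 is typically a thin wrapper around the 'caffe train' command; a minimal sketch, assuming the solver path used in this repository (the actual script may differ, e.g., in GPU id or logging):

```shell
#!/usr/bin/env bash
# Sketch of train.sh; run from Caffe_ROOT.
# Assumes the snapshot directory examples/SRDenseNet/model/ already exists.
./build/tools/caffe train \
    --solver=examples/SRDenseNet/solver.prototxt \
    --gpu=0 \
    2>&1 | tee examples/SRDenseNet/train.log
```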

Parameters for training (saved in solver.prototxt)

  • net: "examples/SRDenseNet/train_net.prototxt"

  • test iteration: 1000

  • test interval: 100

  • base learning rate: 1e-4

  • learning policy: "step"

  • gamma: 0.5

  • stepsize: 10000

  • momentum: 0.9

  • weight decay: 1e-4

  • display interval: 100

  • maximum iteration: 100000

  • snapshot: 1000

  • snapshot_prefix: "examples/SRDenseNet/model/snapshot"

  • solver mode: GPU

  • optimization method: "Adam"
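The values above map onto the standard Caffe SolverParameter fields roughly as follows; this is a sketch assembled from the list, not a verbatim copy of the repository's 'solver.prototxt':

```
net: "examples/SRDenseNet/train_net.prototxt"
test_iter: 1000
test_interval: 100
base_lr: 1e-4
lr_policy: "step"
gamma: 0.5
stepsize: 10000
momentum: 0.9
weight_decay: 1e-4
display: 100
max_iter: 100000
snapshot: 1000
snapshot_prefix: "examples/SRDenseNet/model/snapshot"
solver_mode: GPU
type: "Adam"
```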

Test

  1. Prepare datasets into 'data' directory.

  2. Copy 'test_net.prototxt' from training directory to 'test' directory.

  3. Copy '*.caffemodel' from training directory to 'test/model' directory.

  4. Modify some paths in 'test_SRDenseNet.m' if necessary.

  5. Run 'test_SRDenseNet.m' in Matlab.

  6. Metrics will be printed and reconstructed images will be saved into the 'result' directory.

Network architecture

  • scale: 4

  • batch size: 32 (train), 2 (test)

  • number of feature maps of the first convolutional layer: 8

  • number of blocks: 8

  • number of convolutional layers in one block: 8

  • growth rate: 16

  • number of feature maps of the bottleneck layer: 256

  • dropout: 0.0

  • Each convolution or deconvolution layer is followed by a ReLU layer, except the final reconstruction layer

  • Convolution layer: kernel=3, stride=1, pad=1

  • Bottleneck layer: kernel=1, stride=1, pad=0

  • Deconvolution layer: kernel=4, stride=2, pad=1

  • Loss: Euclidean (L2)

Readers can use 'Netscope' to visualize the network architecture.
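As a sanity check on the hyper-parameters above, the channel counts and the spatial up-scaling can be worked out in a few lines of Python. This is a sketch based on the dense-skip-connection design of the paper; the exact layer wiring generated by 'create_SRDenseNet.py' may differ:

```python
# Channel bookkeeping for SRDenseNet with the hyper-parameters listed above.
first_conv = 8        # feature maps of the first convolutional layer
blocks = 8            # number of dense blocks
layers_per_block = 8  # convolutional layers per block
growth = 16           # each conv layer in a block adds `growth` feature maps

# With dense skip connections, the first conv output and every block output
# are concatenated before the 1x1 bottleneck layer (which reduces to 256).
channels_per_block = layers_per_block * growth               # 128
bottleneck_input = first_conv + blocks * channels_per_block  # 8 + 8*128
print(bottleneck_input)  # 1032 channels enter the bottleneck

# Deconvolution output size: out = (in - 1) * stride - 2 * pad + kernel.
def deconv_out(size, kernel=4, stride=2, pad=1):
    return (size - 1) * stride - 2 * pad + kernel

# Two stacked deconv layers give the x4 scale: each one exactly doubles size.
size = 25
for _ in range(2):
    size = deconv_out(size)
print(size)  # 100: a 25x25 LR patch becomes 100x100
```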

Performance

We provide a pretrained SRDenseNet x4 model trained on the BSDS200, T91, and Train_291 datasets. All images are cropped into non-overlapping 100x100 sub-images. Following previous methods, super-resolution is applied only to the luminance channel in YCbCr space.

Performance in terms of PSNR/SSIM on the Set5, Set14, BSDS100, and Urban100 datasets:

Dataset  | Bicubic interpolation | SRDenseNet
Set5     | 28.42/0.8103          | 31.28/0.8807
Set14    | 26.00/0.7018          | 27.97/0.7658
BSDS100  | 25.96/0.6674          | 27.23/0.7233
Urban100 | 23.14/0.6570          | 25.04/0.7485
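The PSNR values above are computed on the luminance (Y) channel only; a minimal sketch of that metric in Python with NumPy, using the standard ITU-R BT.601 conversion weights (the repository computes this in 'test_SRDenseNet.m', whose exact formula may differ):

```python
import numpy as np

def rgb_to_y(img):
    """Luminance (Y) channel of an 8-bit RGB image, ITU-R BT.601 weights."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a flat gray patch vs. a version off by 1 everywhere (MSE = 1).
ref = np.full((8, 8), 128.0)
out = ref + 1.0
print(round(psnr(ref, out), 2))  # 48.13
```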

Note: our results are not as good as those reported in the paper, so our code needs further improvement.

If you have any suggestion or question, please do not hesitate to contact me.

Contact

Ph.D. candidate, Shengke Xue

College of Information Science and Electronic Engineering

Zhejiang University, Hangzhou, P.R. China

Email: xueshengke@zju.edu.cn, xueshengke1993@gmail.com

