# SRDenseNet-Caffe
This is an implementation of the paper: "T. Tong, G. Li, X. Liu, et al. Image super-resolution using dense skip connections. ICCV 2017, pp. 4809-4817." PDF
## Environment

- OS: CentOS 7, Linux kernel 3.10.0-514.el7.x86_64
- CPU: Intel Xeon E5-2667 v4 @ 3.20 GHz x 32
- Memory: 251.4 GB
- GPU: NVIDIA Tesla P4, 8 GB
- CUDA 8.0 (with cuDNN)
- Caffe (the matcaffe interface is required)
- Python 2.7.5
- Matlab 2017b
## Datasets

These datasets are the same as those provided by other papers. Readers can use them directly or download them from here:

BSDS100, BSDS200, General-100, Set5, Set14, T91, Train_291, Urban100, and DIV2K.
## Training

1. Copy the 'train' directory to 'Caffe_ROOT/examples/' and rename it to 'SRDenseNet'.
2. Prepare the datasets in the 'data' directory.
3. (optional) Run 'data_aug.m' in Matlab for data augmentation, e.g., data_aug('data/BSDS200'), which generates a new directory 'BSDS-200-aug'.
4. Run 'generate_train.m' and 'generate_test.m' in Matlab to generate 'train.h5' and 'test.h5' (choose one or more datasets in advance; a sanity-check sketch for the generated files follows this list).
5. (optional) Modify the parameters in 'create_SRDenseNet.py'.
6. Run 'python create_SRDenseNet.py' on the command line. It regenerates 'train_net.prototxt' and 'test_net.prototxt'.
7. (optional) Modify the parameters in 'solver.prototxt'.
8. Run './examples/SRDenseNet/train.sh' on the command line from the Caffe_ROOT path.
9. Wait for the training procedure to complete.
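Before training, it is worth sanity-checking the generated HDF5 files. Caffe's HDF5Data layer reads datasets named after the top blobs, conventionally 'data' and 'label'; the following is a minimal sketch under that assumption (the script name is illustrative, not part of this repo):

```python
# check_h5.py -- sanity-check the generated HDF5 files (assumes the
# conventional 'data'/'label' dataset names used by Caffe's HDF5Data layer).
from __future__ import print_function
import h5py

for path in ['train.h5', 'test.h5']:
    with h5py.File(path, 'r') as f:
        print(path, list(f.keys()))
        # Caffe expects N x C x H x W; for x4 super-resolution the 'label'
        # spatial size should be 4x the 'data' spatial size.
        print('  data :', f['data'].shape, f['data'].dtype)
        print('  label:', f['label'].shape, f['label'].dtype)
```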
net: "examples/SRDenseNet/train_net.prototxt"
test iteration: 1000
test interval: 100
base learning rate: 1e-4
learning policy: "step"
gamma: 0.5
stepsize: 10000
momentum: 0.9
weight decay: 1e-4
display interval: 100
maximum iteration: 100000
snapshot: 1000
snapshot_prefix: "examples/SRDenseNet/model/snapshot"
solver mode: GPU
optimization method: "Adam"
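For reference, these settings correspond to a 'solver.prototxt' roughly like the following (a sketch; the file shipped with the repo may differ in field order or detail):

```
net: "examples/SRDenseNet/train_net.prototxt"
test_iter: 1000
test_interval: 100
base_lr: 1e-4
lr_policy: "step"
gamma: 0.5
stepsize: 10000
momentum: 0.9
weight_decay: 1e-4
display: 100
max_iter: 100000
snapshot: 1000
snapshot_prefix: "examples/SRDenseNet/model/snapshot"
solver_mode: GPU
type: "Adam"
```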
## Testing

1. Prepare the datasets in the 'data' directory.
2. Copy 'test_net.prototxt' from the training directory to the 'test' directory.
3. Copy '*.caffemodel' from the training directory to the 'test/model' directory.
4. Modify the paths in 'test_SRDenseNet.m' if necessary.
5. Run 'test_SRDenseNet.m' in Matlab.
6. Metrics will be printed and the reconstructed images will be saved in the 'result' directory (a sketch of the PSNR computation follows this list).
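For reference, PSNR here is computed on the luminance (Y) channel (see the results section below). A minimal Python sketch, assuming 8-bit images and the BT.601 conversion used by Matlab's rgb2ycbcr (the function names are illustrative, not from this repo):

```python
# psnr_y.py -- illustrative Y-channel PSNR (not part of this repo).
import numpy as np

def rgb_to_y(img):
    """Luminance channel of an 8-bit RGB image (BT.601, as in Matlab's rgb2ycbcr)."""
    img = img.astype(np.float64) / 255.0
    return 16.0 + 65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]

def psnr(ref, rec):
    """PSNR in dB between two Y-channel images with values in [0, 255]."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```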
## Network settings

- scale: 4
- batch size: 32 (train), 2 (test)
- number of feature maps in the first convolutional layer: 8
- number of blocks: 8
- number of convolutional layers per block: 8
- growth rate: 16
- number of feature maps in the bottleneck layer: 256
- dropout: 0.0
- Each convolution or deconvolution layer is followed by a ReLU layer, except the final reconstruction layer.
- Convolution layers: kernel=3, stride=1, pad=1
- Bottleneck layer: kernel=1, stride=1, pad=0
- Deconvolution layers: kernel=4, stride=2, pad=1
- Loss: Euclidean (L2)

Readers can use Netscope to visualize the network architecture. A pycaffe sketch of one dense block is given below.
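Note that a deconvolution layer with kernel=4, stride=2, pad=1 doubles the spatial size, since (H - 1) * 2 - 2 * 1 + 4 = 2H, so two such layers realize the x4 scale. Since 'create_SRDenseNet.py' emits the prototxt files through pycaffe, the following is a minimal sketch of how one dense block with the settings above could be generated with caffe.NetSpec (illustrative only; names and helper functions are not taken from the repo):

```python
# dense_block_sketch.py -- illustrative pycaffe NetSpec snippet (not the
# repo's create_SRDenseNet.py). Builds one dense block: 8 conv layers with
# growth rate 16, each 3x3 / stride 1 / pad 1 + ReLU, densely concatenated.
import caffe
from caffe import layers as L

def conv_relu(bottom, num_output):
    conv = L.Convolution(bottom, kernel_size=3, stride=1, pad=1,
                         num_output=num_output,
                         weight_filler=dict(type='msra'))
    return conv, L.ReLU(conv, in_place=True)

def dense_block(n, bottom, prefix, num_layers=8, growth_rate=16):
    feats = [bottom]
    for i in range(num_layers):
        # each conv sees the concatenation of all previous feature maps
        concat = feats[0] if len(feats) == 1 else L.Concat(*feats)
        conv, relu = conv_relu(concat, growth_rate)
        setattr(n, '%s_conv%d' % (prefix, i + 1), conv)
        setattr(n, '%s_relu%d' % (prefix, i + 1), relu)
        feats.append(conv)
    return L.Concat(*feats)  # all layer outputs, densely connected

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 1, 25, 25]))  # Y channel, low resolution
n.conv1, n.relu1 = conv_relu(n.data, 8)           # first conv: 8 feature maps
n.block1 = dense_block(n, n.relu1, 'b1')
print(str(n.to_proto()))  # emit prototxt text, as create_SRDenseNet.py does
```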
## Results

We provide a pretrained SRDenseNet x4 model trained on the BSDS200, T91, and Train_291 datasets. All images are cropped into 100x100 sub-images with no overlap. Following previous methods, super-resolution is applied only to the luminance channel in YCbCr space.
Performance in terms of PSNR/SSIM on the Set5, Set14, BSDS100, and Urban100 datasets:

| Dataset | Bicubic interpolation | SRDenseNet |
|---|---|---|
| Set5 | 28.42 / 0.8103 | 31.28 / 0.8807 |
| Set14 | 26.00 / 0.7018 | 27.97 / 0.7658 |
| BSDS100 | 25.96 / 0.6674 | 27.23 / 0.7233 |
| Urban100 | 23.14 / 0.6570 | 25.04 / 0.7485 |
Note: our results are not as good as those reported in the paper, so our code needs further improvement.
## Contact

If you have any suggestions or questions, please do not hesitate to contact me.

Shengke Xue, Ph.D. candidate
College of Information Science and Electronic Engineering
Zhejiang University, Hangzhou, P.R. China
Email: xueshengke@zju.edu.cn, xueshengke1993@gmail.com