# DRRN-pytorch
This is an unofficial PyTorch implementation of the Deep Recursive Residual Network (DRRN) from "Image Super-Resolution via Deep Recursive Residual Network", CVPR 2017. [Paper]
You can get the official Caffe implementation here.
## Usage
### Training
```
usage: main.py [-h] [--batchSize BATCHSIZE] [--nEpochs NEPOCHS] [--lr LR]
               [--step STEP] [--cuda] [--resume RESUME]
               [--start-epoch START_EPOCH] [--clip CLIP] [--threads THREADS]
               [--momentum MOMENTUM] [--weight-decay WEIGHT_DECAY]
               [--pretrained PRETRAINED]

optional arguments:
  -h, --help            show this help message and exit
  --batchSize           Training batch size
  --nEpochs             Number of epochs to train for
  --lr                  Learning rate. Default=0.1
  --step                Learning rate is decayed every n epochs, Default: n=5
  --cuda                Use cuda?
  --resume              Path to checkpoint
  --start-epoch         Manual epoch number (useful on restarts)
  --clip                Clipping gradients. Default=0.01
  --threads             Number of threads for the data loader to use, Default: 1
  --momentum            Momentum, Default: 0.9
  --weight-decay        Weight decay, Default: 1e-4
  --pretrained          Path to the pretrained model, used for weight initialization (default: none)
```
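An example of training usage is shown as follows:
```
python main.py --cuda
```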
### Evaluation
```
usage: eval.py [-h] [--cuda] [--model MODEL] [--dataset DATASET]
               [--scale SCALE]

PyTorch DRRN Evaluation

optional arguments:
  -h, --help         show this help message and exit
  --cuda             use cuda?
  --model MODEL      model path
  --dataset DATASET  dataset name, Default: Set5
  --scale SCALE      scale factor (x2/x3/x4)
```
An example of evaluation usage is shown as follows:
```
python eval.py --cuda
```
### Prepare Training dataset
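Training expects pre-generated LR/HR patch pairs. Below is a minimal Python sketch of one way to build such a set; the HDF5 layout (`data`/`label` datasets) and the patch size/stride values are assumptions for illustration, not this repo's exact pipeline:

```python
import h5py
import numpy as np
from PIL import Image

def make_training_set(image_paths, scale=2, patch=31, stride=21,
                      out_file="train.h5"):
    """Crop HR patches from the Y channel and pair them with bicubic
    LR inputs upsampled back to HR size (VDSR/DRRN-style training)."""
    data, label = [], []
    for path in image_paths:
        y = np.asarray(Image.open(path).convert("YCbCr"))[:, :, 0]
        # crop so both dimensions divide evenly by the scale factor
        h, w = (y.shape[0] // scale) * scale, (y.shape[1] // scale) * scale
        hr_img = Image.fromarray(y[:h, :w])
        lr_img = hr_img.resize((w // scale, h // scale), Image.BICUBIC)
        lr = np.asarray(lr_img.resize((w, h), Image.BICUBIC), np.float32) / 255.0
        hr = np.asarray(hr_img, np.float32) / 255.0
        for i in range(0, h - patch + 1, stride):
            for j in range(0, w - patch + 1, stride):
                data.append(lr[i:i + patch, j:j + patch])
                label.append(hr[i:i + patch, j:j + patch])
    with h5py.File(out_file, "w") as f:
        # shape (N, 1, patch, patch): channel axis added for PyTorch
        f.create_dataset("data", data=np.stack(data)[:, None])
        f.create_dataset("label", data=np.stack(label)[:, None])
```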
## Performance
We provide a rough pre-trained DRRN_B1U25 model trained on the 291-image dataset with data augmentation; a more carefully tuned optimization strategy should yield better numbers. For a DRRN_B1U9 variant, you can manually change the number of recursive residual units here.
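A minimal sketch of what B1U25 means structurally (not the repo's exact file): one recursive block whose two 3x3 convolutions are shared across all 25 residual units, plus a global residual connection. The class and argument names here are illustrative:

```python
import torch
from torch import nn
import torch.nn.functional as F

class DRRN(nn.Module):
    """DRRN_B1U25 sketch: one recursive block, 25 residual units.
    conv1/conv2 are reused in every unit, so the parameter count
    stays constant no matter how deep the recursion goes."""
    def __init__(self, num_units: int = 25, channels: int = 128):
        super().__init__()
        # bias=False everywhere, matching the notes below
        self.input = nn.Conv2d(1, channels, 3, padding=1, bias=False)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.output = nn.Conv2d(channels, 1, 3, padding=1, bias=False)
        self.num_units = num_units

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x                        # global identity branch
        inputs = self.input(F.relu(x))
        out = inputs
        for _ in range(self.num_units):     # U = 25 recursions, shared weights
            out = self.conv2(F.relu(self.conv1(F.relu(out))))
            out = out + inputs              # local identity inside the block
        return self.output(F.relu(out)) + residual
```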
- The same adjustable gradient clipping is implemented as in the original paper (see the sketch after these notes).
- No bias terms are used in this implementation.
- No batch normalization is used in this implementation.
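A minimal sketch of the adjustable clipping rule, assuming a base threshold `theta` (the `--clip` value) and the current learning rate `lr`; the helper name is illustrative. Clipping to `theta / lr` keeps the effective update magnitude (`lr * grad`) roughly bounded as the learning rate decays:

```python
import torch
from torch import nn

def step_with_adjustable_clipping(model: nn.Module,
                                  optimizer: torch.optim.Optimizer,
                                  theta: float, lr: float) -> None:
    """Call after loss.backward(): clip gradient norm to theta / lr,
    then apply the optimizer step."""
    nn.utils.clip_grad_norm_(model.parameters(), theta / lr)
    optimizer.step()
```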
### Performance in PSNR on Set5
| Scale | DRRN_B1U25 Paper | DRRN_B1U25 PyTorch |
| -----:| ----------------:| ------------------:|
| x2    | 37.74            | 37.69              |
| x3    | 34.03            | 34.02              |
| x4    | 31.68            | 31.70              |
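For reference, super-resolution papers conventionally report PSNR on the luminance (Y) channel with a border of `scale` pixels shaved off each side. A minimal sketch for 8-bit images (the function name is illustrative):

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, shave_border: int = 0) -> float:
    """PSNR between two single-channel 8-bit images, after cropping
    shave_border pixels from every side (use shave_border=scale)."""
    h, w = gt.shape[:2]
    pred = pred[shave_border:h - shave_border,
                shave_border:w - shave_border].astype(np.float64)
    gt = gt[shave_border:h - shave_border,
            shave_border:w - shave_border].astype(np.float64)
    mse = np.mean((pred - gt) ** 2)
    if mse == 0:
        return float("inf")
    return 20.0 * np.log10(255.0 / np.sqrt(mse))
```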