This code was written for PyTorch < 0.4, but most users today are on PyTorch >= 0.4. Migrating the code is straightforward; please refer to the PyTorch 0.4.0 Migration Guide.
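One of the most common changes the migration guide calls for is how scalar losses are read out: `Tensor` and `Variable` were merged in 0.4, and 0-dimensional tensors replaced the old `loss.data[0]` idiom. A minimal sketch of that change:

```python
import torch

# A 0-dim tensor, as a loss value is in PyTorch >= 0.4
loss = torch.tensor(0.5)

# PyTorch < 0.4 style (no longer works on 0-dim tensors):
#   value = loss.data[0]
# PyTorch >= 0.4 style:
value = loss.item()
print(value)  # → 0.5
```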
If you've already built the training and validation datasets (i.e. the train.h5 and val.h5 files), set `preprocess` to `False`.
According to the paper, DnCNN-S has 17 layers.
`noiseL` is used for training and `val_noiseL` is used for validation. They should be set to the same value for unbiased validation. You can set whatever noise level you need.
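For intuition, here is a minimal sketch (not the repository's exact code; tensor names are illustrative, and images are assumed scaled to [0, 1]) of how a fixed noise level such as `noiseL=25` is typically applied when training DnCNN-S:

```python
import torch

noiseL = 25  # noise standard deviation, stated on the [0, 255] scale

# A dummy batch of grayscale 40x40 patches in [0, 1]
img_train = torch.rand(4, 1, 40, 40)

# Additive white Gaussian noise with sigma = noiseL / 255
noise = torch.randn_like(img_train) * (noiseL / 255.0)

# The noisy input fed to the network; the clean noise map is the target
imgn_train = img_train + noise
```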
3. Train DnCNN-B (DnCNN with blind noise level)
```
python train.py \
  --preprocess True \
  --num_of_layers 20 \
  --mode B \
  --val_noiseL 25
```
NOTE
If you've already built the training and validation datasets (i.e. the train.h5 and val.h5 files), set `preprocess` to `False`.
According to the paper, DnCNN-B has 20 layers.
`noiseL` is ignored when training DnCNN-B. You can set `val_noiseL` to whatever you need.
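The reason `noiseL` is ignored is that blind training draws a separate noise level per sample instead of using one fixed sigma. A minimal sketch (tensor names are illustrative; the [0, 55] range is the one used in the DnCNN paper):

```python
import torch

# Dummy batch of grayscale 40x40 patches in [0, 1]
img_train = torch.rand(4, 1, 40, 40)

# One noise level per sample, drawn uniformly from [0, 55]
stdN = torch.empty(img_train.size(0)).uniform_(0, 55)

# Broadcast each sample's sigma over its channels and pixels
noise = torch.randn_like(img_train) * (stdN / 255.0).view(-1, 1, 1, 1)

imgn_train = img_train + noise
```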
The definition of the loss function: set `size_average` to `False` when defining the loss function. With `size_average=True`, a pixel-wise average is computed, but what we need is a sample-wise average.

```
criterion = nn.MSELoss(size_average=False)
```
The loss is then computed as:

```
loss = criterion(out_train, noise) / (imgn_train.size()[0]*2)
```

i.e. the sum over one batch of samples is divided by 2N, where N is the number of samples (the batch size).
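Note that in PyTorch >= 0.4.1 the `size_average` argument is deprecated; `size_average=False` corresponds to `reduction='sum'`. A small self-contained sketch of the same loss with the modern API (the tensors here are dummies standing in for the network output and noise target):

```python
import torch
import torch.nn as nn

# Modern equivalent of nn.MSELoss(size_average=False)
criterion = nn.MSELoss(reduction='sum')

noise = torch.randn(8, 1, 40, 40) * 0.1   # dummy residual target
out_train = torch.zeros_like(noise)       # dummy network prediction
imgn_train = torch.rand_like(noise)       # dummy noisy input batch

# Divide the summed squared error by 2N, N = batch size
loss = criterion(out_train, noise) / (imgn_train.size()[0] * 2)
```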