SqueezeNet achieves AlexNet-level accuracy with 50x fewer parameters,
which reduces the computation time for Neural Style while producing
comparable results. It is also included in Torchvision, so no explicit
download is needed: simply running the implementation will automatically
download a ~5MB pretrained SqueezeNet from the PyTorch website.
Note that this implementation aims at a small model size and a fast
training time on less capable machines, like my own laptop; it should
not beat the results obtained with the pretrained VGG used in the
original paper. No GPU is needed for this implementation. Each training
step takes less than 1.20s for a 616x372 image on an Intel Core i5
laptop.
Use `--standard-train True` to perform the suggested training process:
500 steps of Adam with learning rate 0.1, followed by another 500 steps
of Adam with learning rate 0.01.
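The two-phase schedule above can be sketched as follows; `loss_fn` is a placeholder standing in for the actual style-plus-content loss, and the image size is illustrative:

```python
# Sketch of the suggested two-phase Adam schedule (assumed structure;
# loss_fn below is a toy placeholder, not the repo's style/content loss).
import torch

img = torch.zeros(1, 3, 64, 64, requires_grad=True)  # image being optimized
loss_fn = lambda x: (x - 1.0).pow(2).mean()          # placeholder loss

for lr, steps in [(0.1, 500), (0.01, 500)]:          # lr 0.1 then 0.01
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(img)
        loss.backward()
        optimizer.step()
```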
You can also run the IPython Notebook version. This is convenient if
you want to play with the parameters; you can even try different layers
of SqueezeNet. This implementation only uses 7 of its 12 defined
layers.