
SqueezeNet-Neural-Style-Pytorch


Lightweight Neural Style in PyTorch

This is a lightweight PyTorch implementation of A Neural Algorithm of Artistic Style using a pretrained SqueezeNet.

SqueezeNet achieves AlexNet-level accuracy with 50x fewer parameters, which reduces the computation time for neural style while producing comparable results. It is also included in torchvision, so no explicit download is needed: simply running the implementation will automatically download a ~5MB SqueezeNet from the PyTorch website.
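
For illustration, a minimal sketch of loading the pretrained network from torchvision (the repository may load it slightly differently, e.g. a different SqueezeNet variant):

import torchvision.models as models

# Downloads the ~5MB pretrained SqueezeNet weights on first use
cnn = models.squeezenet1_1(pretrained=True).features.eval()

# The network is used only as a fixed feature extractor, so freeze its weights
for param in cnn.parameters():
    param.requires_grad = False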

Note that this implementation aims for small size and fast training on less capable machines like my own laptop; it is not expected to beat the results of the pretrained VGG used in the original paper. No GPU is needed. In testing, each training step takes less than 1.20s for a 616x372 image on an Intel Core i5 laptop.

Examples:

(example style-transfer result images)

Usage

Basic usage:

python neural_style.py --style <image.jpg> --content <image.jpg>

Use --standard-train True to perform the suggested training process: 500 Adam steps with learning rate 0.1 followed by another 500 Adam steps with learning rate 0.01, as sketched below.
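
A rough sketch of that schedule, assuming the image pixels are optimized directly and using total_loss as a placeholder for the script's combined style + content loss:

import torch

# Placeholder for the weighted style + content loss actually used by the script
def total_loss(img):
    return img.pow(2).mean()

# Image tensor being optimized (a random 616x372 image, matching the timings above)
generated = torch.rand(1, 3, 372, 616, requires_grad=True)

# 500 Adam steps at lr 0.1, then another 500 at lr 0.01
for lr, steps in [(0.1, 500), (0.01, 500)]:
    optimizer = torch.optim.Adam([generated], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = total_loss(generated)
        loss.backward()
        optimizer.step()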

You can also run the IPython Notebook version. This is convenient if you want to play with the parameters; you can even try different layers of SqueezeNet. This implementation only uses 7 of its 12 defined layers; a sketch of extracting such intermediate features follows.
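
A hedged sketch of pulling activations from a subset of SqueezeNet's feature layers (the indices below are illustrative only; the notebook's actual 7 layers may differ):

import torch
import torchvision.models as models

cnn = models.squeezenet1_1(pretrained=True).features.eval()

def extract_features(x, layer_indices=(0, 3, 4, 5, 6, 7, 8)):
    # Run the input through the feature stack, keeping the selected activations
    feats = []
    for i, layer in enumerate(cnn):
        x = layer(x)
        if i in layer_indices:
            feats.append(x)
    return feats

features = extract_features(torch.rand(1, 3, 372, 616))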

Advanced Neural Style

For a more advanced implementation on a capable GPU machine, please check Fast Neural Style: Perceptual Losses for Real-Time Style Transfer and Super-Resolution; code is available in Torch or TensorFlow.

