chainer-gogh


Implementation of "A Neural Algorithm of Artistic Style" (http://arxiv.org/abs/1508.06576) in Chainer. The Japanese readme can be found here.
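The algorithm transfers style by matching two kinds of CNN statistics: raw feature maps for the content image and Gram matrices of feature maps for the style image. Below is a minimal sketch of the style term using Chainer functions; it is illustrative only, not the exact code in this repository.

import chainer.functions as F

def gram_matrix(feature):
    # feature: a (1, C, H, W) feature map taken from one CNN layer
    b, c, h, w = feature.shape
    flat = F.reshape(feature, (c, h * w))
    # channel-to-channel correlations, normalized by the map size
    return F.matmul(flat, flat, transb=True) / (c * h * w)

def style_loss(generated_feature, style_feature):
    # squared distance between the Gram matrices of the generated and style images
    return F.mean_squared_error(gram_matrix(generated_feature),
                                gram_matrix(style_feature))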

Usage:

Install Chainer

pip install chainer

See https://github.com/pfnet/chainer for details.
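A quick way to check the installation and whether a GPU backend was found (a minimal sketch; chainer.cuda.available is True only when CuPy/CUDA is installed):

import chainer

# Print the installed Chainer version and whether a CUDA backend is usable.
print(chainer.__version__)
print("GPU available:", chainer.cuda.available)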

Download the model(s)

There are multiple models to choose from:

  • NIN (-m nin): simply specify it to get started.

  • VGG (-m vgg or -m vgg_chainer): takes a long time to produce good-looking images. After downloading and using the vgg_chainer model once, all subsequent runs load the model much faster (available in Chainer 1.19 and above).

  • GoogLeNet (-m googlenet): about the same as NIN, but should have the potential for good images. The optimal parameters are unknown.

  • i2v (-m i2v): lightweight compared to VGG; should be good for illustrations/anime drawings. The optimal parameters are unknown.

Run on CPU

python chainer-gogh.py -m nin -i input.png -s style.png -o output_dir -g -1

Run on GPU

python chainer-gogh.py -m nin -i input.png -s style.png -o output_dir -g <GPU number>

Stylize an image with VGG

python chainer-gogh.py -m vgg_chainer -i input.png -s style.png -o output_dir -g 0 --width 256

How to specify the model

-m nin

It is possible to change from nin to vgg, vgg_chainer, googlenet or i2v. To do this, put the model file in the working directory, keeping the default file name.
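The Caffe-format models are presumably loaded through Chainer's CaffeFunction; a rough sketch is shown below. The file name is an assumed example of a typical NIN download name; keep whatever default name the chosen model ships with.

from chainer.links.caffe import CaffeFunction

# Parsing a .caffemodel file can take a while on load.
model = CaffeFunction("nin_imagenet.caffemodel")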

Generate multiple images simultaneously

  • First, create a file called input.txt and list the input and style file names (a small parsing sketch appears at the end of this section):

input0.png style0.png
input1.png style1.png
...

Then, run chainer-gogh-multi.py:

python chainer-gogh-multi.py -i input.txt

The VGG model uses a lot of GPU memory, so be careful!
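For reference, the two-column input.txt format above pairs each content image with a style image. A hypothetical snippet to read it (variable names are illustrative):

# Read input.txt: one "content_image style_image" pair per line.
pairs = []
with open("input.txt") as f:
    for line in f:
        if line.strip():
            content_path, style_path = line.split()
            pairs.append((content_path, style_path))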

About the parameters

  • --lr: learning rate. Increase this when the generation progress is slow.

  • --lam: increase to make the output image similar to the input, decrease to add more style.

  • alpha, beta: coefficients weighting the error propagated from each layer. They are hard-coded for each model (see the sketch after this list).
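A rough sketch of how these knobs could enter the objective (the exact weighting in chainer-gogh may differ; all names below are illustrative):

import chainer.functions as F

def gram(x):
    # Gram matrix of a (1, C, H, W) feature map, normalized by its size.
    b, c, h, w = x.shape
    f = F.reshape(x, (c, h * w))
    return F.matmul(f, f, transb=True) / (c * h * w)

def total_loss(gen_feats, content_feats, style_feats, alpha, beta, lam):
    # Per-layer combination: lam scales the content term against the style
    # term; alpha and beta weight the error coming from each layer.
    loss = 0
    for gf, cf, sf, a, b in zip(gen_feats, content_feats, style_feats, alpha, beta):
        loss += lam * a * F.mean_squared_error(gf, cf)        # content term
        loss += b * F.mean_squared_error(gram(gf), gram(sf))  # style term
    return loss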

Advice

  • At the moment, using square images (e.g. 256x256) is best.

