
fast-style-transfer

2020-02-10

Real-Time Style Transfer

A TensorFlow implementation of real-time style transfer based on the paper Perceptual Losses for Real-Time Style Transfer and Super-Resolution by Johnson et al.

Algorithm

See my related blog post for an overview of the algorithm for real-time style transfer.

The total loss used is the weighted sum of the style loss, the content loss and a total variation loss. This third component is not specifically mentioned in the original paper, but it leads to more cohesive generated images.
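The weighted sum described above can be sketched as follows. This is a minimal illustration in NumPy rather than TensorFlow, with hypothetical function names; the default weights mirror the settings listed under "Running the code" below.

```python
import numpy as np

def total_variation_loss(img):
    # img: (H, W, C) array. Penalizes squared differences between
    # horizontally and vertically neighboring pixels, which smooths
    # out high-frequency noise in the generated image.
    dx = img[:, 1:, :] - img[:, :-1, :]
    dy = img[1:, :, :] - img[:-1, :, :]
    return np.sum(dx ** 2) + np.sum(dy ** 2)

def total_loss(content_loss, style_loss, tv_loss,
               content_weight=1.0, style_weight=5.0, tv_weight=1e-6):
    # Weighted sum of the three loss components.
    return (content_weight * content_loss
            + style_weight * style_loss
            + tv_weight * tv_loss)
```

A flat image has zero total variation loss, so this term only pushes against abrupt pixel-to-pixel changes; the small default weight keeps it from blurring out legitimate edges.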

Requirements

  • Python 2.7

  • TensorFlow 1.x

  • SciPy & NumPy

  • Download the pre-trained VGG network and place it in the top level of the repository (~500MB)

  • For training:

    • It is recommended to use a GPU to get good results within a reasonable timeframe

    • You will need an image dataset to train your networks. I used the Microsoft COCO dataset and resized the images to 256x256 pixels

  • Generation of styled images can be run on a CPU or GPU. Some pre-trained style networks can be downloaded from here (~700MB)
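Preparing the training dataset amounts to resizing every image to 256x256 pixels. A minimal sketch of that preprocessing step, shown here with Pillow (the directory names are placeholders; the repository itself uses SciPy for image I/O):

```python
import os
from PIL import Image

def resize_images(src_dir, dst_dir, size=(256, 256)):
    # Resize every readable image in src_dir to `size` and save it
    # under the same filename in dst_dir.
    if not os.path.isdir(dst_dir):
        os.makedirs(dst_dir)
    for name in os.listdir(src_dir):
        path = os.path.join(src_dir, name)
        try:
            img = Image.open(path)
        except IOError:
            continue  # skip files that are not images
        img = img.convert('RGB').resize(size, Image.BILINEAR)
        img.save(os.path.join(dst_dir, name))
```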

Running the code

Training a network for a particular style

python train_network.py --style <style image> --train-path <path to training images> --save-path <directory to save network>

The algorithm will run with the following settings:

NUM_EPOCHS = 5        # override with --epochs argument
BATCH_SIZE = 4        # override with --batch-size argument
LEARNING_RATE = 1e-3  # override with --learning-rate argument
CONTENT_WEIGHT = 1    # override with --content-weight argument
STYLE_WEIGHT = 5      # override with --style-weight argument
TV_WEIGHT = 1e-6      # override with --tv-weight argument

To train the network using a GPU, run with the --use-gpu flag.
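A hypothetical sketch of how these flags could map onto the defaults above using argparse (the actual argument handling in train_network.py may differ):

```python
import argparse

# Each flag overrides one of the default settings listed above.
parser = argparse.ArgumentParser(description='Train a style transfer network')
parser.add_argument('--epochs', type=int, default=5)
parser.add_argument('--batch-size', type=int, default=4)
parser.add_argument('--learning-rate', type=float, default=1e-3)
parser.add_argument('--content-weight', type=float, default=1.0)
parser.add_argument('--style-weight', type=float, default=5.0)
parser.add_argument('--tv-weight', type=float, default=1e-6)
parser.add_argument('--use-gpu', action='store_true')

# Example: a longer run with a larger batch; unspecified flags keep defaults.
args = parser.parse_args(['--epochs', '10', '--batch-size', '16'])
```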

Using a trained network to generate a style transfer

python stylize_image.py --content <content image> --network-path <network directory> --output-path <output filename>

To run the style transfer using a GPU, run with the --use-gpu flag.

I have made the pre-trained networks for the three styles shown in the Results section below available. They can be downloaded from here (~700MB).

Results

I trained three style transfer networks using the following three style images:

[Three style images]

Each network was trained with 80,000 training images taken from the Microsoft COCO dataset and resized to 256×256 pixels. Training was carried out for 100,000 iterations with a batch size of 4 and took approximately 12 hours on a GTX 1080 GPU. Using the trained network to generate style transfers took approximately 5 seconds on a CPU. Here are some of the style transfers I was able to generate:


[Example style transfer outputs]

Acknowledgements

This code was inspired by an existing TensorFlow implementation by Logan Engstrom, and I have re-used most of his transform network code here. The VGG network code is based on an existing implementation by Anish Athalye.

License

Released under GPLv3, see LICENSE.txt

