See my related blog post for an overview of the algorithm for real-time style transfer.
The total loss is the weighted sum of the style loss, the content loss, and a total variation loss. This third component is not specifically mentioned in the original paper, but it leads to more cohesive generated images.
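For concreteness, here is a minimal sketch of how the three components might be combined, assuming a PyTorch implementation; the weight values are illustrative placeholders, not the values used to train the released networks:

```python
import torch

def total_variation_loss(img):
    # Sum of absolute differences between neighbouring pixels, taken
    # horizontally and vertically; this penalizes high-frequency noise.
    # img has shape (batch, channels, height, width).
    tv_h = torch.abs(img[:, :, 1:, :] - img[:, :, :-1, :]).sum()
    tv_w = torch.abs(img[:, :, :, 1:] - img[:, :, :, :-1]).sum()
    return tv_h + tv_w

def total_loss(style_loss, content_loss, tv_loss,
               style_weight=1e5, content_weight=1.0, tv_weight=1e-6):
    # Weighted sum of the three components. The weights here are
    # placeholders chosen for illustration only.
    return (style_weight * style_loss
            + content_weight * content_loss
            + tv_weight * tv_loss)
```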
To run the style transfer on a GPU, pass the --use-gpu flag.
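Internally, a flag like this typically just switches the device the model runs on; a sketch of that pattern, assuming PyTorch (the flag name matches the one above, everything else is illustrative):

```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--use-gpu", action="store_true",
                    help="Run the style transfer on a CUDA device.")
args = parser.parse_args()

# Fall back to the CPU if no GPU was requested or none is available.
device = torch.device(
    "cuda" if args.use_gpu and torch.cuda.is_available() else "cpu")
```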
I have made the pre-trained networks for the three styles shown in the Results section below available. They can be downloaded from here (~700 MB).
Results
I trained three style transfer networks using the following three style images:
Each network was trained on 80,000 training images taken from the Microsoft COCO dataset and resized to 256×256 pixels. Training ran for 100,000 iterations with a batch size of 4 and took approximately 12 hours on a GTX 1080 GPU. Using a trained network to generate a style transfer took approximately 5 seconds on a CPU.
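A minimal sketch of a data pipeline matching that setup, assuming torchvision; the dataset path is a placeholder, ImageFolder stands in for however the COCO images are actually stored, and resize-then-centre-crop is one assumed way of producing 256×256 inputs:

```python
import torch
from torchvision import datasets, transforms

# Scale the shorter edge to 256, then centre-crop to 256x256,
# matching the input size described above.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])

# "path/to/coco" is a placeholder; ImageFolder expects images grouped
# in subdirectories, so adapt this to your on-disk layout.
train_data = datasets.ImageFolder("path/to/coco", transform=preprocess)
train_loader = torch.utils.data.DataLoader(
    train_data, batch_size=4, shuffle=True)
```

Here are some of the style transfers I was able to generate: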