This repository contains a PyTorch implementation of the following paper:
A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras (NVIDIA), Samuli Laine (NVIDIA), Timo Aila (NVIDIA)
http://stylegan.xyz/paper
Abstract: We propose an alternative generator
architecture for generative adversarial networks, borrowing from style
transfer literature. The new architecture leads to an automatically
learned, unsupervised separation of high-level attributes (e.g., pose
and identity when trained on human faces) and stochastic variation in
the generated images (e.g., freckles, hair), and it enables intuitive,
scale-specific control of the synthesis. The new generator improves the
state-of-the-art in terms of traditional distribution quality metrics,
leads to demonstrably better interpolation properties, and also better
disentangles the latent factors of variation. To quantify interpolation
quality and disentanglement, we propose two new, automated methods that
are applicable to any generator architecture. Finally, we introduce a
new, highly varied and high-quality dataset of human faces.
Picture: These people are not real – they were produced by our generator
that allows control over different aspects of the image.
Motivation
To the best of my knowledge, there is still no PyTorch 1.0 implementation of StyleGAN comparable to the one NVlabs released (TensorFlow),
so I want to implement it on PyTorch 1.0.1 to extend its usage in the PyTorch community.
Notice
@date: 2019.10.21
@info: One thing I forgot to highlight: you need to change the default Star dataset to your own dataset (such as FFHQ or others) in `opts.py`. Sorry for my carelessness.
① Pass your own training dataset, batch size, and common settings in TrainOpts of `opts.py`.
② Run `python3 train_stylegan.py`.
③ You can find intermediate pictures generated by the style generator in `opts.det/images/`.
Project
We follow the official release code of StyleGAN carefully. If you find any bug or mistake in the implementation,
please tell us so we can improve it. Thank you very much!
Finished
blur2d mechanism. (This step takes a lot of GPU memory; if you don't have enough resources, please set it to None.)
Truncation trick.
Two kinds of upsampling methods in G_synthesis.
Two kinds of downsampling methods in StyleDiscriminator.
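For readers unfamiliar with the blur2d step: it is a small depthwise low-pass filter applied around up/downsampling to reduce aliasing. Here is a minimal sketch of the idea in PyTorch; the function name `blur2d` and the exact padding are illustrative, not necessarily identical to this repository's implementation (the (1, 2, 1) binomial kernel matches the official StyleGAN default):

```python
import torch
import torch.nn.functional as F

def blur2d(x, kernel=(1, 2, 1)):
    """Depthwise low-pass blur on a (N, C, H, W) tensor.

    Builds a separable 2-D kernel from the 1-D binomial kernel,
    normalizes it so image brightness is preserved, and applies it
    per channel with a grouped convolution.
    """
    k = torch.tensor(kernel, dtype=x.dtype, device=x.device)
    k = k[:, None] * k[None, :]          # (3, 3) outer product
    k = k / k.sum()                      # normalize to sum 1
    c = x.shape[1]
    w = k.view(1, 1, 3, 3).repeat(c, 1, 1, 1)  # one kernel per channel
    return F.conv2d(x, w, padding=1, groups=c)

x = torch.ones(1, 3, 8, 8)
y = blur2d(x)
assert y.shape == x.shape
```

Because the kernel is normalized, interior pixels of a constant image are unchanged; only the zero-padded border is attenuated.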
Our code can run the 1024 x 1024 resolution image generation task on a 1080Ti. If you have a stronger graphics card or more GPUs,
you may train your model with a larger batch size and define your own multi-GPU version of this code.
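The truncation trick mentioned above pulls sampled intermediate latents `w` toward their running average, trading diversity for sample quality, and is typically applied only to the coarse (low-resolution) style layers. A minimal sketch of this, with an assumed per-layer layout of `w` (the function name `truncate_w` and the default `cutoff` are illustrative, not this repository's exact API):

```python
import torch

def truncate_w(w, w_avg, psi=0.7, cutoff=8):
    """Truncation trick on intermediate latents.

    w:      (N, num_layers, dim) styles, one row per style layer.
    w_avg:  (dim,) running average of w collected during training.
    psi:    truncation strength; psi=1 disables truncation.
    cutoff: only the first `cutoff` (coarse) layers are truncated.
    """
    coefs = torch.ones(w.shape[1], dtype=w.dtype)
    coefs[:cutoff] = psi                       # truncate coarse layers only
    return w_avg + coefs.view(1, -1, 1) * (w - w_avg)

w = torch.randn(4, 18, 512)
w_avg = torch.zeros(512)
t = truncate_w(w, w_avg, psi=0.0)
# with psi=0, the truncated layers collapse exactly to w_avg
assert torch.allclose(t[:, :8], w_avg.expand(4, 8, 512))
```

Setting `psi` between 0 and 1 interpolates between the average face (high quality, low variety) and the untruncated sample.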
My email is samuel.gao023@gmail.com. If you have any questions or want to open a PR, please let me know. Thank you.
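As a footnote on the two kinds of upsampling in G_synthesis: one common choice is parameter-free nearest-neighbor interpolation (usually followed by a regular convolution), and the other is a learned stride-2 transposed convolution. A minimal sketch of both, assuming equal input/output channel counts (the names `upsample_nearest` and `UpsampleConvT` are illustrative, not this repository's exact modules):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def upsample_nearest(x):
    # Kind 1: parameter-free 2x nearest-neighbor upsampling.
    return F.interpolate(x, scale_factor=2, mode="nearest")

class UpsampleConvT(nn.Module):
    # Kind 2: learned 2x upsampling with a transposed convolution.
    # kernel_size=4, stride=2, padding=1 exactly doubles H and W.
    def __init__(self, channels):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(channels, channels,
                                         kernel_size=4, stride=2, padding=1)

    def forward(self, x):
        return self.deconv(x)

x = torch.randn(1, 16, 4, 4)
assert upsample_nearest(x).shape == (1, 16, 8, 8)
assert UpsampleConvT(16)(x).shape == (1, 16, 8, 8)
```

The downsampling counterparts in the discriminator are the mirror images: average pooling versus a stride-2 convolution.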