# dcgan_code
All images in this paper are generated by a neural network. They are NOT REAL.
Full paper here: http://arxiv.org/abs/1511.06434
### Other implementations of DCGAN

## Summary of DCGAN

We stabilize Generative Adversarial Networks with some architectural constraints:
- Replace any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator).
- Use batchnorm in both the generator and the discriminator.
- Remove fully connected hidden layers for deeper architectures; just use average pooling at the end.
- Use ReLU activation in the generator for all layers except the output, which uses Tanh.
- Use LeakyReLU activation in the discriminator for all layers.
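The constraints above can be sketched as a generator/discriminator pair. This is a hedged sketch in PyTorch (an assumption — the original repo used Theano), with layer sizes following the 64x64 architecture from the paper:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector z (nz, 1, 1) to a 64x64 RGB image."""
    def __init__(self, nz=100, ngf=64, nc=3):
        super().__init__()
        self.net = nn.Sequential(
            # fractional-strided (transposed) convolutions instead of pooling/upsampling
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),            # 4x4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),            # 8x8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),            # 16x16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf), nn.ReLU(True),                # 32x32
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),  # output layer uses Tanh; all other layers use ReLU
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores a 64x64 RGB image as real/fake; LeakyReLU in all layers."""
    def __init__(self, ndf=64, nc=3):
        super().__init__()
        self.net = nn.Sequential(
            # strided convolutions instead of pooling
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),                   # 32x32
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),  # 16x16
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),  # 8x8
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),  # 4x4
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(-1)

z = torch.randn(2, 100, 1, 1)
img = Generator()(z)          # (2, 3, 64, 64)
score = Discriminator()(img)  # (2,)
```

No fully connected hidden layers appear anywhere in either network, per the guidelines.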
We use the discriminator as a pre-trained network for CIFAR-10 classification and show pretty decent results.
We generate really cool bedroom images that look strikingly real.
To convince you that the network is not cheating, we:
show the interpolated latent space, where transitions are really smooth and every image along the interpolation is a bedroom.
show bedrooms after one epoch of training (with a 0.0002 learning rate); the network can't realistically have memorized the data at this stage.
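The latent interpolation above is just a linear walk between points in Z. A minimal NumPy sketch, assuming a `generate` function that decodes Z vectors into images (not shown here):

```python
import numpy as np

def interpolate(z_start, z_end, steps=10):
    """Linearly interpolate between two points in the latent space Z."""
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - alphas) * z_start + alphas * z_end

# DCGAN samples Z from a uniform distribution on [-1, 1]
z0 = np.random.uniform(-1, 1, size=100)
z1 = np.random.uniform(-1, 1, size=100)
path = interpolate(z0, z1, steps=10)  # shape (10, 100)
# images = generate(path)  # each row would decode to a plausible bedroom
```

If every intermediate image is a sensible bedroom rather than noise or a blend artifact, the generator has learned a smooth manifold instead of memorizing samples.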
To explore the representations that the network has learnt, we:
show deconvolution over the filters, demonstrating that maximal activations occur at objects like windows and beds.
figure out a way to identify and remove the filters that draw windows during generation.
Now you can control the generator so it does not output certain objects.
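Mechanically, "removing" a filter just means zeroing it out. A hedged NumPy sketch of the dropping step only; identifying which filters draw windows (done in the paper with a logistic regression on activations over bounding boxes) is assumed and not shown:

```python
import numpy as np

def drop_filters(weights, filter_idx):
    """Zero out the given output filters of a conv weight tensor (out, in, kh, kw)."""
    w = weights.copy()
    w[filter_idx] = 0.0
    return w

# hypothetical conv layer weights and window-filter indices
w = np.random.randn(8, 4, 5, 5)
window_filters = [1, 3]
w2 = drop_filters(w, window_filters)
```

Generating with `w2` in place of `w` would then suppress the windows while leaving the rest of the scene composition intact.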
Because we are tripping
Smiling woman - neutral woman + neutral man = Smiling man. Whuttttt!
man with glasses - man without glasses + woman without glasses = woman with glasses. Omg!!!!
The network learnt, in a completely unsupervised fashion, a latent space where ROTATIONS ARE LINEAR. WHHHAAATT????!!!!!!
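The face arithmetic above is plain vector arithmetic on Z. A sketch with NumPy; following the paper, the Z vectors of three exemplars per concept are averaged first, which gives much more stable results than single samples (the `generate` decoding step is assumed and not shown):

```python
import numpy as np

def concept_vector(zs):
    """Average several Z vectors whose samples share a visual concept."""
    return np.mean(zs, axis=0)

rng = np.random.default_rng(0)
# hypothetical stand-ins for Z vectors found by sampling and inspecting images
z_smiling_woman = concept_vector(rng.uniform(-1, 1, (3, 100)))
z_neutral_woman = concept_vector(rng.uniform(-1, 1, (3, 100)))
z_neutral_man   = concept_vector(rng.uniform(-1, 1, (3, 100)))

z_smiling_man = z_smiling_woman - z_neutral_woman + z_neutral_man
# images = generate(z_smiling_man[None])  # would decode to a smiling man
```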
Figure 11 (trained on ImageNet) has a plane with bird legs. So cool.
Generated bedrooms after five epochs of training. There appears to be evidence of visual under-fitting via repeated textures across multiple samples.
Generated bedrooms after one training pass through the dataset. Theoretically, the model could learn to memorize training examples, but this is experimentally unlikely as we train with a small learning rate and minibatch SGD. We are aware of no prior empirical evidence demonstrating memorization with SGD and a small learning rate in only one epoch.
Interpolation between a series of 9 random points in Z show that the space learned has smooth transitions, with every image in the space plausibly looking like a bedroom. In the 6th row, you see a room without a window slowly transforming into a room with a giant window. In the 10th row, you see what appears to be a TV slowly being transformed into a window.
Top row: un-modified samples from the model. Bottom row: the same samples generated with the "window" filters dropped out. Some windows are removed, others are transformed into objects with similar visual appearance such as doors and mirrors. Although visual quality decreased, overall scene composition stayed similar, suggesting the generator has done a good job disentangling scene representation from object representation. Extended experiments could be done to remove other objects from the image and modify the objects the generator draws.