residual-flows

Residual Flows for Invertible Generative Modeling [arXiv:1906.02735]

Building on the use of Invertible Residual Networks in generative modeling, we propose:

  • Unbiased estimation of the log-density of samples.

  • Memory-efficient reformulation of the gradients.

  • LipSwish activation function (a minimal sketch follows below).

As a result, Residual Flows scale to much larger networks and datasets.
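
The LipSwish activation is Swish divided by 1.1, which caps the activation's Lipschitz constant at 1; this matters because the fixed-point inversion of residual blocks requires the residual branch to be contractive. A minimal PyTorch sketch, assuming a learnable β kept positive via softplus (the repository's own layer is authoritative):

import torch
import torch.nn as nn
import torch.nn.functional as F

class LipSwish(nn.Module):
    """Swish(x) = x * sigmoid(beta * x), divided by 1.1 so the
    activation is at most 1-Lipschitz."""
    def __init__(self):
        super().__init__()
        # Unconstrained parameter; softplus keeps the effective beta positive.
        self.beta = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        return x * torch.sigmoid(F.softplus(self.beta) * x) / 1.1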

Requirements

  • PyTorch 1.0+

  • Python 3.6+

Preprocessing

ImageNet:

  1. Follow the instructions in preprocessing/create_imagenet_benchmark_datasets.

  2. Convert .npy files to .pth using preprocessing/convert_to_pth (see the sketch after this list).

  3. Place in data/imagenet32 and data/imagenet64.
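
For step 2 (and the equivalent CelebAHQ step below), the conversion is essentially a load-and-save. A hypothetical sketch; the file names and the tensor layout expected by train_img.py are assumptions, and preprocessing/convert_to_pth remains the authoritative script:

import numpy as np
import torch

# Hypothetical file name; use the .npy files produced in step 1.
arr = np.load('train_32x32.npy')
torch.save(torch.from_numpy(arr), 'train_32x32.pth')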

CelebAHQ 64x64 5bit:

  1. Download from https://github.com/aravindsrinivas/flowpp/tree/master/flows_celeba.

  2. Convert .npy files to .pth using preprocessing/convert_to_pth.

  3. Place in data/celebahq64_5bit.

CelebAHQ 256x256:

# Download Glow's preprocessed dataset and extract it into data/celebahq.
mkdir -p data/celebahq
wget https://storage.googleapis.com/glow-demo/data/celeba-tfr.tar
tar -C data/celebahq -xvf celeba-tfr.tar
python extract_celeba_from_tfrecords.py

Density Estimation Experiments

MNIST:

python train_img.py --data mnist --imagesize 28 --actnorm True --wd 0 --save experiments/mnist

CIFAR10:

python train_img.py --data cifar10 --actnorm True --save experiments/cifar10

ImageNet 32x32:

python train_img.py --data imagenet32 --actnorm True --nblocks 32-32-32 --save experiments/imagenet32

ImageNet 64x64:

python train_img.py --data imagenet64 --imagesize 64 --actnorm True --nblocks 32-32-32 --factor-out True --squeeze-first True --save experiments/imagenet64

CelebAHQ 256x256:

python train_img.py --data celebahq --imagesize 256 --nbits 5 --actnorm True --act elu --batchsize 8 --update-freq 5 --n-exact-terms 8 --fc-end False --factor-out True --squeeze-first True --nblocks 16-16-16-16-16-16 --save experiments/celebahq256
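
These experiments report density in bits per dimension. As a reminder of the convention, a per-example negative log-likelihood measured in nats converts to bits/dim by dividing by the number of dimensions times ln 2. A generic helper, not part of the repo:

import math

def bits_per_dim(nll_nats, num_dims):
    """Convert NLL in nats to bits per dimension,
    e.g. num_dims = 3 * 32 * 32 for CIFAR10."""
    return nll_nats / (num_dims * math.log(2))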

Pretrained Models

Model checkpoints can be downloaded from the repository's releases page.

Use the argument --resume [checkpt.pth] to evaluate or sample from the model.

Each checkpoint contains two sets of parameters, one from training and one containing the exponential moving average (EMA) accumulated over the course of training. Scripts will automatically use the EMA parameters for evaluation and sampling.
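
To verify which parameter sets a downloaded checkpoint actually contains, you can inspect it directly. A minimal sketch; the top-level key names are not documented here, so treat the expected contents as an assumption:

import torch

checkpt = torch.load('checkpt.pth', map_location='cpu')
# Expect one entry for the raw training weights and one for the EMA weights.
print(list(checkpt.keys()))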

BibTeX

@inproceedings{chen2019residualflows,
  title={Residual Flows for Invertible Generative Modeling},
  author={Chen, Ricky T. Q. and Behrmann, Jens and Duvenaud, David and Jacobsen, J{\"{o}}rn{-}Henrik},
  booktitle={Advances in Neural Information Processing Systems},
  year={2019}
}

