Abstract
This paper proposes a multi-grid method for learning
energy-based generative ConvNet models of images. For
each grid, we learn an energy-based probabilistic model
where the energy function is defined by a bottom-up convolutional neural network (ConvNet or CNN). Learning such
a model requires generating synthesized examples from the
model. Within each iteration of our learning algorithm, for
each observed training image, we generate synthesized images at multiple grids by initializing the finite-step MCMC
sampling from a minimal 1 × 1 version of the training image. The synthesized image at each subsequent grid is obtained by a finite-step MCMC initialized from the synthesized image generated at the previous coarser grid. After
obtaining the synthesized examples, the parameters of the
models at multiple grids are updated separately and simultaneously based on the differences between synthesized and
observed examples. We show that this multi-grid method
can learn realistic energy-based generative ConvNet models, and it outperforms the original contrastive divergence
(CD) and persistent CD.
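The multi-grid sampling procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes Langevin dynamics as the finite-step MCMC sampler, a grid-to-grid upsampling factor of 4 (matching grids of size 1 × 1, 4 × 4, 16 × 16, 64 × 64), and hypothetical per-grid energy-gradient functions `grad_energies` standing in for the learned ConvNet models.

```python
import numpy as np

def upsample(img, factor=4):
    # nearest-neighbor upsampling from a coarse grid to the next finer grid
    return np.kron(img, np.ones((factor, factor, 1)))

def langevin_step(img, grad_energy, step=0.01):
    # one Langevin update: gradient descent on the energy plus Gaussian noise
    noise = np.random.randn(*img.shape)
    return img - 0.5 * step**2 * grad_energy(img) + step * noise

def multigrid_synthesize(train_img, grad_energies, n_steps=30):
    """Generate synthesized images at multiple grids, initializing the
    MCMC at each grid from the synthesized image of the previous
    coarser grid (hypothetical sketch of the multi-grid scheme)."""
    # start from the minimal 1 x 1 version of the training image
    synth = train_img.mean(axis=(0, 1), keepdims=True)
    results = []
    for grad_energy in grad_energies:  # models ordered coarse -> fine
        synth = upsample(synth)        # lift to the next finer grid
        for _ in range(n_steps):       # finite-step MCMC at this grid
            synth = langevin_step(synth, grad_energy)
        results.append(synth)
    return results
```

In a full learning loop, the synthesized images returned for each grid would then be contrasted with the (downscaled) observed images to update that grid's model parameters separately and simultaneously.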