Abstract
Deep Neural Networks trained as image auto-encoders
have recently emerged as a promising direction for advancing the state-of-the-art in image compression. The key challenge in learning such networks is twofold: To deal with
quantization, and to control the trade-off between reconstruction error (distortion) and entropy (rate) of the latent
image representation. In this paper, we focus on the latter
challenge and propose a new technique to navigate the rate-distortion trade-off for an image compression auto-encoder.
The main idea is to directly model the entropy of the latent
representation by using a context model: A 3D-CNN which
learns a conditional probability model of the latent distribution of the auto-encoder. During training, the auto-encoder
makes use of the context model to estimate the entropy of its
representation, and the context model is concurrently updated to learn the dependencies between the symbols in the
latent representation. Our experiments show that this approach, when measured in MS-SSIM, yields a state-of-the-art image compression system based on a simple convolutional auto-encoder.
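The core idea of the abstract can be sketched in a few lines: the context model assigns each quantized latent symbol a conditional probability, the negative log-likelihood of the symbols under that model estimates the rate, and the training loss trades this rate off against distortion. The sketch below is a minimal illustration with hypothetical values; a probability table stands in for the paper's 3D-CNN, and `beta`, `distortion`, and the probabilities are assumed placeholders, not values from the paper.

```python
import numpy as np

def estimate_rate_bits(cond_probs):
    """Entropy estimate of the latent: -sum_i log2 P(symbol_i | context_i).

    cond_probs[i] is the context model's predicted probability of the
    i-th quantized symbol given its (causal) context; in the paper this
    model is a 3D-CNN, here it is just a fixed array of probabilities.
    """
    return -np.sum(np.log2(cond_probs))

# Toy latent of 4 quantized symbols and the (assumed) probabilities
# the context model assigns to each of them.
cond_probs = np.array([0.5, 0.25, 0.8, 0.9])

rate = estimate_rate_bits(cond_probs)  # estimated bits to encode the latent

# Rate-distortion training objective: L = distortion + beta * rate.
distortion = 0.1   # e.g. MSE or 1 - MS-SSIM of the reconstruction (placeholder)
beta = 0.01        # trade-off weight controlling the R-D operating point
loss = distortion + beta * rate
```

In the actual system both the auto-encoder and the context model are trained jointly: the rate term is differentiable with respect to the context model's predicted probabilities, so minimizing the loss simultaneously sharpens the probability model and pushes the encoder toward more predictable (lower-entropy) latents.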