Abstract
Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.
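The core operation of the inversion network mentioned above is the up-convolution (transposed convolution), which increases spatial resolution as features are decoded back toward an image. The following is a minimal, generic sketch of this operation in plain Python, not the paper's actual architecture; the kernel values, sizes, and stride are illustrative assumptions.

```python
# Minimal sketch of a 2D "up-convolution" (transposed convolution).
# Generic illustration only -- kernel and stride are arbitrary assumptions,
# not taken from the paper's network.

def up_convolve(feature_map, kernel, stride=2):
    """Transposed convolution of a 2D feature map with a 2D kernel.

    Each input activation "stamps" a scaled copy of the kernel onto the
    output grid at stride-spaced positions, so spatial resolution grows:
    out_size = stride * (in_size - 1) + kernel_size.
    """
    h, w = len(feature_map), len(feature_map[0])
    kh, kw = len(kernel), len(kernel[0])
    out_h = stride * (h - 1) + kh
    out_w = stride * (w - 1) + kw
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(h):
        for j in range(w):
            a = feature_map[i][j]
            for di in range(kh):
                for dj in range(kw):
                    out[stride * i + di][stride * j + dj] += a * kernel[di][dj]
    return out

# A 2x2 feature map upsampled with a 3x3 kernel and stride 2 yields a 5x5 map.
feat = [[1.0, 2.0],
        [3.0, 4.0]]
kern = [[0.25, 0.5, 0.25],
        [0.5,  1.0, 0.5],
        [0.25, 0.5, 0.25]]
out = up_convolve(feat, kern)
```

Stacking several such layers (with learned kernels) lets a decoder map a low-resolution feature representation back to full image resolution, which is the general idea behind reconstructing images from feature activations.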