
Inverting Visual Representations with Convolutional Networks


Abstract

Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.
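The core idea is to train a decoder made of up-convolutions (transposed convolutions) that maps a fixed feature representation back to image space, supervised with a per-pixel reconstruction loss. The sketch below illustrates this setup; it is a minimal, hypothetical PyTorch example, not the authors' exact architecture. The feature dimensionality (4096, as in an AlexNet fully-connected layer), the 64x64 output resolution, and all layer sizes are illustrative assumptions.

```python
# Minimal sketch of inverting a feature representation with an
# up-convolutional decoder. Layer sizes and dimensions are assumptions,
# not the architecture from the paper.
import torch
import torch.nn as nn


class UpConvInverter(nn.Module):
    def __init__(self, feat_dim=4096):
        super().__init__()
        # Project the feature vector to a small spatial map, then upsample
        # with strided transposed convolutions up to image resolution.
        self.fc = nn.Linear(feat_dim, 256 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 8x8 -> 16x16
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),    # 16x16 -> 32x32
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),     # 32x32 -> 64x64
        )

    def forward(self, feats):
        x = self.fc(feats).view(-1, 256, 4, 4)
        return self.decoder(x)


def train_step(inverter, optimizer, feats, images):
    # Regress the reconstruction against the original image with an L2
    # (per-pixel) loss; the feature extractor itself stays fixed.
    optimizer.zero_grad()
    recon = inverter(feats)
    loss = nn.functional.mse_loss(recon, images)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = UpConvInverter()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Dummy batch: 8 feature vectors and their corresponding 64x64 images.
    feats = torch.randn(8, 4096)
    images = torch.randn(8, 3, 64, 64)
    print(train_step(model, opt, feats, images))
```

The same decoder structure can in principle be trained on shallow features (HOG, SIFT, LBP) by flattening or spatially arranging those descriptors as the input representation.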
