
A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations

2020-03-20

Abstract

Backpropagation-based visualizations have been proposed to interpret convolutional neural networks (CNNs); however, a theory is missing to justify their behaviors: guided backpropagation (GBP) and the deconvolutional network (DeconvNet) generate more human-interpretable but less class-sensitive visualizations than the saliency map. Motivated by this, we develop a theoretical explanation revealing that GBP and DeconvNet are essentially doing (partial) image recovery, which is unrelated to the network decisions. Specifically, our analysis shows that the backward ReLU introduced by GBP and DeconvNet, and the local connections in CNNs, are the two main causes of compelling visualizations. Extensive experiments are provided that support the theoretical analysis.
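For context, the three visualization methods named in the abstract differ only in how the gradient is passed backward through each ReLU: the saliency map uses standard backpropagation (gradient kept where the forward ReLU was active), DeconvNet keeps only positive incoming gradients regardless of the forward activation, and GBP applies both masks. The following is a minimal NumPy sketch of these well-known backward rules; the function name, argument names, and toy arrays are illustrative and not taken from the paper.

```python
import numpy as np

def relu_backward(grad_out, pre_activation, mode="saliency"):
    """Backward pass through one ReLU under the three visualization rules.

    grad_out       : gradient arriving from the layer above
    pre_activation : input the ReLU saw during the forward pass
    mode           : 'saliency'  -- standard backprop: keep gradient where the
                                    forward ReLU was active
                     'deconvnet' -- keep only positive incoming gradients,
                                    ignoring the forward activations
                     'guided'    -- guided backpropagation: apply both masks
    """
    forward_mask = (pre_activation > 0).astype(grad_out.dtype)
    positive_grad_mask = (grad_out > 0).astype(grad_out.dtype)

    if mode == "saliency":
        return grad_out * forward_mask
    if mode == "deconvnet":
        return grad_out * positive_grad_mask
    if mode == "guided":
        return grad_out * forward_mask * positive_grad_mask
    raise ValueError(f"unknown mode: {mode}")


# Toy example: identical incoming gradients, three different backward rules.
pre_act = np.array([-1.0, 2.0, 3.0])
grad = np.array([0.5, -0.4, 0.7])
for m in ("saliency", "deconvnet", "guided"):
    print(m, relu_backward(grad, pre_act, mode=m))
```

The point the abstract makes hinges on this difference: because the "backward ReLU" of GBP and DeconvNet filters by the sign of the gradient rather than (only) by the forward activations, the resulting signal becomes largely a reconstruction of the input image rather than an explanation of the class decision.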

