Resource Paper

Learning to Explain: An Information-Theoretic Perspective on Model Interpretation

2020-03-16

Abstract

We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.
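To make the objective concrete, here is a minimal PyTorch sketch of the kind of training loop the abstract describes: a selector network samples a relaxed k-hot feature mask (via a Gumbel-softmax subset relaxation), and an approximator network is trained to match the explained model's output on the masked input, maximizing the variational lower bound E[log q(Y | X_S)] on the mutual information. The `black_box` stand-in, the network sizes, and the values of `d` and `k` are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the variational feature-selection objective,
# assuming PyTorch; `black_box`, sizes, d, and k are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

d, k = 20, 5  # total features, size of the selected subset (illustrative)

selector = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))
approximator = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(
    [*selector.parameters(), *approximator.parameters()], lr=1e-3
)

def relaxed_khot_mask(logits, k, tau=0.5):
    # Draw k Gumbel-softmax (Concrete) samples over the d features and
    # take an elementwise max: a differentiable relaxation of a k-hot mask.
    samples = torch.stack(
        [F.gumbel_softmax(logits, tau=tau) for _ in range(k)], dim=0
    )
    return samples.max(dim=0).values

def black_box(x):
    # Hypothetical stand-in for the model being explained, p(Y | X).
    return torch.softmax(x[:, :2], dim=-1)

for step in range(1000):
    x = torch.randn(128, d)
    with torch.no_grad():
        target = black_box(x)            # response distribution to explain
    mask = relaxed_khot_mask(selector(x), k)
    logits = approximator(x * mask)      # q(Y | X_S), the variational family
    # Maximizing E[log q(Y | X_S)] == minimizing cross-entropy to the model.
    loss = -(target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At test time, one forward pass through `selector` ranks features per example, and the top-k indices serve as the instancewise explanation.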
