Neural Interaction Transparency (NIT): Disentangling Learned Interactions for Improved Interpretability


Abstract

Neural networks are known to model statistical interactions, but they entangle the interactions at intermediate hidden layers for shared representation learning. We propose a framework, Neural Interaction Transparency (NIT), that disentangles the shared learning across different interactions to obtain their intrinsic lower-order and interpretable structure. This is done through a novel regularizer that directly penalizes interaction order. We show that disentangling interactions reduces a feedforward neural network to a generalized additive model with interactions, which can lead to transparent models that perform comparably to state-of-the-art models. NIT is also flexible and efficient; it can learn generalized additive models with maximum K-order interactions by training only O(1) models.
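The core idea of directly penalizing interaction order can be illustrated with a minimal sketch. The code below is not the authors' implementation; the class `GatedNet`, the gating scheme, and the soft penalty `order_penalty` are illustrative assumptions. Each hidden unit sees the inputs through learnable gates, and the regularizer discourages any unit from depending on more than K input features, which is the unit's interaction order.

```python
# Illustrative sketch (not the NIT authors' code): per-unit feature gates plus a
# penalty on how many features each hidden unit uses, i.e., its interaction order.
import torch
import torch.nn as nn

class GatedNet(nn.Module):
    def __init__(self, n_features, n_hidden):
        super().__init__()
        # One gate per (hidden unit, input feature); sigmoid maps it to a soft 0/1 mask.
        self.gates = nn.Parameter(torch.zeros(n_hidden, n_features))
        self.fc1 = nn.Linear(n_features, n_hidden)
        self.fc2 = nn.Linear(n_hidden, 1)

    def forward(self, x):
        g = torch.sigmoid(self.gates)                 # soft feature mask per hidden unit
        h = torch.relu(nn.functional.linear(x, self.fc1.weight * g, self.fc1.bias))
        return self.fc2(h)

    def order_penalty(self, K):
        # Soft count of active features per hidden unit; penalize any excess over K.
        soft_order = torch.sigmoid(self.gates).sum(dim=1)
        return torch.relu(soft_order - K).sum()

# Usage: add the order penalty to the task loss so each unit models at most ~K features.
model = GatedNet(n_features=10, n_hidden=32)
x, y = torch.randn(64, 10), torch.randn(64, 1)
loss = nn.functional.mse_loss(model(x), y) + 0.1 * model.order_penalty(K=2)
loss.backward()
```

If every hidden unit ends up gated to at most K features, the network decomposes into a sum of small subnetworks, each over at most K features, which is exactly the structure of a generalized additive model with interactions.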

