Interpretable Neural Predictions with Differentiable Binary Variables

2019-09-18
Abstract

The success of neural networks comes hand in hand with a desire for more interpretability. We focus on text classifiers and make them more interpretable by having them provide a justification—a rationale—for their predictions. We approach this problem by jointly training two neural network models: a latent model that selects a rationale (i.e. a short and informative part of the input text), and a classifier that learns from the words in the rationale alone. Previous work proposed to assign binary latent masks to input positions and to promote short selections via sparsity-inducing penalties such as L0 regularisation. We propose a latent model that mixes discrete and continuous behaviour, allowing at the same time for binary selections and gradient-based training without REINFORCE. In our formulation, we can tractably compute the expected value of penalties such as L0, which allows us to directly optimise the model towards a prespecified text selection rate. We show that our approach is competitive with previous work on rationale extraction, and explore further uses in attention mechanisms.
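The abstract's "mix of discrete and continuous behaviour" with a tractable expected L0 can be realised by stretching a continuous distribution on (0, 1) beyond the unit interval and rectifying it, so that the endpoints 0 and 1 receive non-zero probability mass (the same mechanism as the Hard Concrete of Louizos et al., 2018). Below is a minimal sketch assuming a Kumaraswamy base distribution; the function names and the stretch bounds l = -0.1, r = 1.1 are illustrative choices, not values taken from the paper:

```python
def kuma_cdf(t, a, b):
    # CDF of the Kumaraswamy(a, b) distribution on (0, 1):
    # F(t) = 1 - (1 - t^a)^b
    return 1.0 - (1.0 - t ** a) ** b

def prob_nonzero(a, b, l=-0.1, r=1.1):
    # Stretch the support from (0, 1) to (l, r), then rectify:
    # z = min(1, max(0, s)) for s in the stretched distribution.
    # All mass of s below 0 collapses onto z = 0, so
    # P(z = 0) = F((0 - l) / (r - l)), and P(z != 0) is its complement.
    return 1.0 - kuma_cdf((0.0 - l) / (r - l), a, b)

def expected_l0(params, l=-0.1, r=1.1):
    # Expected number of selected (non-zero) positions: the expected L0
    # penalty is the sum of per-position probabilities P(z_i != 0),
    # each available in closed form from the base CDF.
    return sum(prob_nonzero(a, b, l, r) for a, b in params)
```

Because the expected L0 is a differentiable closed-form function of the per-position parameters (a, b), it can be used directly as a penalty, or in a Lagrangian constraint that targets a prespecified selection rate, as the abstract describes.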

