The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations

2019-10-08
Abstract: Post-hoc interpretability approaches have proven to be powerful tools for generating explanations for the predictions of a trained black-box model. However, they create the risk of producing explanations that reflect artifacts learned by the model rather than actual knowledge from the data. This paper focuses on the case of counterfactual explanations and asks whether the generated instances can be justified, i.e., continuously connected to some ground-truth data. We evaluate the risk of generating unjustified counterfactual examples by investigating the local neighborhoods of the instances whose predictions are to be explained, and show that this risk is quite high for several datasets. Furthermore, we show that most state-of-the-art approaches do not distinguish justified from unjustified counterfactual examples, leading to less useful explanations.
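The notion of a "justified" counterfactual can be illustrated with a minimal sketch. The code below is not the authors' method: it generates a counterfactual by greedily moving a query point toward the training data of the target class until the prediction flips, and then uses a crude proxy for justification, checking whether a straight-line path from the counterfactual to its nearest ground-truth instance of the same predicted class stays in that class throughout. The paper's actual criterion (continuous connection through the data) is more general; all function names here are illustrative assumptions.

```python
# Illustrative sketch of a justification check for counterfactuals.
# Assumption: a straight-line path to a same-class training point that
# never changes prediction approximates "continuous connection to
# ground-truth data". This is a simplification, not the paper's algorithm.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

def counterfactual(x, target, step=0.05, max_iter=500):
    """Greedy search: nudge x toward the nearest training point
    of the target class until the model's prediction flips."""
    candidates = X[y == target]
    anchor = candidates[np.argmin(np.linalg.norm(candidates - x, axis=1))]
    cf = x.copy()
    for _ in range(max_iter):
        if clf.predict(cf.reshape(1, -1))[0] == target:
            return cf
        cf = cf + step * (anchor - cf)
    return cf

def is_justified(cf, target, n_steps=50):
    """Proxy check: the straight path from cf to its nearest
    same-class ground-truth instance never leaves the target class."""
    candidates = X[y == target]
    anchor = candidates[np.argmin(np.linalg.norm(candidates - cf, axis=1))]
    path = np.linspace(cf, anchor, n_steps)  # shape (n_steps, 2)
    return bool(np.all(clf.predict(path) == target))

x0 = X[0]
target = 1 - y[0]
cf = counterfactual(x0, target)
print("prediction flipped:", clf.predict(cf.reshape(1, -1))[0] == target)
print("justified (proxy):", is_justified(cf, target))
```

An unjustified counterfactual under this proxy would be one whose connecting path crosses a region the model assigns to the other class, suggesting the counterfactual sits in an artifact region rather than near real data.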

