TOWARDS NEURAL NETWORKS THAT PROVABLY KNOW WHEN THEY DON'T KNOW

2019-12-30

Abstract

It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data; thus, ReLU networks do not know when they don't know. This property, however, is highly important in safety-critical applications. In the context of out-of-distribution (OOD) detection, a number of proposals have been made to mitigate this problem, but none of them provides mathematical guarantees. In this paper we propose a new approach to OOD detection that overcomes both shortcomings. Our approach can be used with ReLU networks, yields provably low-confidence predictions far away from the training data, and gives the first certificates of low confidence in a neighborhood of an out-distribution point. In the experiments we show that state-of-the-art methods fail in this worst-case setting, whereas our model can guarantee its performance while retaining state-of-the-art OOD detection performance.
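The guarantee the abstract refers to can be made concrete with a simple construction: combine the classifier's softmax output with explicit density estimates for the in-distribution and an out-distribution, so that the predictive confidence provably decays to the uniform value 1/K wherever the in-distribution density vanishes. The sketch below is a minimal numerical illustration of this density-blending idea, assuming hypothetical densities p_in and p_out (e.g., from Gaussian mixture models); it is not the paper's exact construction.

```python
import numpy as np

def blended_confidence(softmax_probs, p_in, p_out):
    """Blend classifier probabilities with density estimates.

    When the in-distribution density p_in vanishes far from the
    training data, the output provably tends to the uniform
    distribution 1/K over the K classes, i.e. minimal confidence.
    """
    k = softmax_probs.shape[-1]
    return (softmax_probs * p_in + (1.0 / k) * p_out) / (p_in + p_out)

# Toy demo with K = 3 classes and a deliberately over-confident
# ReLU-classifier output, evaluated near and far from the data.
probs = np.array([0.99, 0.005, 0.005])

near = blended_confidence(probs, p_in=1e-1, p_out=1e-3)   # in-distribution point
far = blended_confidence(probs, p_in=1e-12, p_out=1e-3)   # far away: p_in ~ 0

print(near.max())  # ~0.983: the confident prediction is kept near the data
print(far.max())   # ~0.334 ~= 1/3: provably low confidence far away
```

Because the blended confidence is bounded in terms of the ratio p_in/p_out, an upper bound on p_in over an entire neighborhood of a point immediately certifies low confidence on that neighborhood, which matches the flavor of certificate the abstract claims.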
