
High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach

2020-03-19

Abstract

This paper considers the generation of prediction intervals (PIs) by neural networks for quantifying uncertainty in regression tasks. It is axiomatic that high-quality PIs should be as narrow as possible, whilst capturing a specified portion of data. We derive a loss function directly from this axiom that requires no distributional assumption. We show how its form derives from a likelihood principle, that it can be used with gradient descent, and that model uncertainty is accounted for in ensembled form. Benchmark experiments show the method outperforms current state-of-the-art uncertainty quantification methods, reducing average PI width by over 10%.
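The abstract only states the idea at a high level: the loss rewards narrow intervals while penalising coverage that falls below a specified level, with no distributional assumption. The following is a minimal PyTorch sketch of a quality-driven loss of this kind, assuming a network that outputs a lower and an upper bound per input; the coverage level `alpha`, penalty weight `lam`, and sigmoid softening factor are illustrative defaults, not values taken from this page.

```python
import torch

def qd_loss(y_lower, y_upper, y_true, alpha=0.05, lam=15.0, soften=160.0):
    """Quality-driven PI loss: keep intervals narrow while capturing >= (1 - alpha) of points."""
    n = y_true.shape[0]
    # Hard capture indicator (non-differentiable), used for the width term.
    k_hard = ((y_true >= y_lower) & (y_true <= y_upper)).float()
    # Soft capture indicator (differentiable), used for the coverage term.
    k_soft = (torch.sigmoid(soften * (y_true - y_lower)) *
              torch.sigmoid(soften * (y_upper - y_true)))
    # Mean width of the intervals that actually capture their target point.
    mpiw_capt = torch.sum((y_upper - y_lower) * k_hard) / (torch.sum(k_hard) + 1e-6)
    # Penalise coverage only when it drops below the target level 1 - alpha.
    picp = torch.mean(k_soft)
    penalty = lam * n / (alpha * (1 - alpha)) * torch.clamp((1 - alpha) - picp, min=0.0) ** 2
    return mpiw_capt + penalty
```

The "ensembled" aspect mentioned in the abstract would then correspond to training several such networks from different initialisations and aggregating their interval bounds to account for model uncertainty.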

Previous: Differentiable Compositional Kernel Learning for Gaussian Processes

Next: Composite Functional Gradient Learning of Generative Adversarial Models
