Accurate Uncertainties for Deep Learning Using Calibrated Regression

Abstract

Methods for reasoning under uncertainty are a key building block of accurate and reliable machine learning systems. Bayesian methods provide a general framework to quantify uncertainty. However, because of model misspecification and the use of approximate inference, Bayesian uncertainty estimates are often inaccurate — for example, a 90% credible interval may not contain the true outcome 90% of the time. Here, we propose a simple procedure for calibrating any regression algorithm; when applied to Bayesian and probabilistic models, it is guaranteed to produce calibrated uncertainty estimates given enough data. Our procedure is inspired by Platt scaling and extends previous work on classification. We evaluate this approach on Bayesian linear regression as well as feedforward and recurrent neural networks, and find that it consistently outputs well-calibrated credible intervals while improving performance on time series forecasting and model-based reinforcement learning tasks.
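As a rough illustration of the kind of recalibration the abstract describes, the sketch below fits a monotone (isotonic-regression) map from a model's predicted CDF values to their empirical frequencies on a held-out calibration set. The helper names (model_cdf, calibration_set) are assumptions for illustration, and the paper's full procedure is simplified here.

import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_recalibrator(predicted_cdf_values):
    # predicted_cdf_values: F_t(y_t) for each pair (x_t, y_t) in a held-out
    # calibration set, where F_t is the model's predictive CDF at x_t.
    p = np.asarray(predicted_cdf_values, dtype=float)
    # Empirical frequency of each predicted probability level: the fraction
    # of calibration points whose predicted CDF value is at most that level.
    empirical = np.array([np.mean(p <= level) for level in p])
    # Monotone map from predicted probabilities to empirical probabilities.
    recalibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    recalibrator.fit(p, empirical)
    return recalibrator

# Hypothetical usage: model_cdf(x, y) returns the model's predicted CDF value
# at y for input x, and calibration_set is a list of (x, y) pairs.
# cal_p = np.array([model_cdf(x, y) for x, y in calibration_set])
# R = fit_recalibrator(cal_p)
# At test time, composing R with the predicted CDF before reading off the
# 0.05 and 0.95 quantile levels yields an approximately calibrated
# 90% credible interval.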
