Deeper Connections between Neural Networks and Gaussian Processes Speed-up Active Learning
Abstract
Active learning methods for neural networks are usually based on greedy criteria, which ultimately yield a single new design point per evaluation. Such an approach requires either heuristics to sample a batch of design points in one active learning iteration, or retraining the neural network after each added data point, which is computationally inefficient. Moreover, uncertainty estimates for neural networks are sometimes overconfident for points lying far from the training sample. In this work, we propose to approximate Bayesian neural networks (BNNs) by Gaussian processes (GPs), which allows us to update the uncertainty estimates of predictions efficiently without retraining the neural network, while avoiding overconfident uncertainty predictions for out-of-sample points. In a series of experiments on real-world data, including large-scale problems of chemical and physical modeling, we show the superiority of the proposed approach over state-of-the-art methods.
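
As a minimal illustration (not the paper's implementation), the sketch below shows why a GP surrogate enables the batch selection described above: after a point is added to the design, predictive variances follow from the kernel matrix alone, so no network retraining is needed. The RBF kernel, its lengthscale, and all names here are illustrative assumptions; in the proposed method the covariance would instead come from the GP approximation of the BNN.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel; the lengthscale is an arbitrary choice."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_var(X_train, X_query, noise=1e-3):
    """Predictive variance of a zero-mean GP at X_query given inputs X_train."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    K_s = rbf_kernel(X_train, X_query)
    v = np.linalg.solve(L, K_s)
    return rbf_kernel(X_query, X_query).diagonal() - (v**2).sum(0)

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(20, 2))   # current design
X_pool = rng.uniform(-1, 1, size=(100, 2))   # candidate pool

# Greedy batch selection: repeatedly pick the pool point with the largest
# predictive variance and refresh the variances; only the GP is updated.
for _ in range(5):
    var = gp_posterior_var(X_train, X_pool)
    best = int(np.argmax(var))
    X_train = np.vstack([X_train, X_pool[best]])
    X_pool = np.delete(X_pool, best, axis=0)
```

Because each selection only re-solves a small linear system in the kernel matrix, a whole batch of design points can be chosen in one active learning iteration before the network is retrained once on the enlarged dataset.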