Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds

2020-01-02

Abstract

We design a new algorithm for batch active learning with deep neural network models. Our algorithm, Batch Active learning by Diverse Gradient Embeddings (BADGE), samples groups of points that are disparate and high-magnitude when represented in a hallucinated gradient space, a strategy designed to incorporate both predictive uncertainty and sample diversity into every selected batch. Crucially, BADGE trades off between diversity and uncertainty without requiring any hand-tuned hyperparameters. While other approaches sometimes succeed for particular batch sizes or architectures, BADGE consistently performs as well or better, making it a useful option for real-world active learning problems.
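The abstract's two ingredients can be sketched concretely. A minimal, illustrative version (not the authors' reference implementation): the "hallucinated gradient" for an unlabeled point treats the model's own argmax prediction as its label, so the cross-entropy gradient with respect to the last linear layer becomes an outer product of `(probs - onehot(argmax))` with the penultimate-layer features; a k-means++-style seeding over these embeddings then favors points that are both far apart (diverse) and far from the origin (uncertain). Function names and the uniform choice of the first point are assumptions for this sketch.

```python
import numpy as np

def gradient_embeddings(probs, features):
    """Hallucinated last-layer gradient embeddings (illustrative sketch).

    probs:    (n, k) softmax outputs for n unlabeled points, k classes.
    features: (n, d) penultimate-layer activations.
    Pretending each point's argmax prediction is its true label, the
    cross-entropy gradient w.r.t. the final linear layer's weights is
    (probs - onehot(argmax)) outer features, flattened to a (k*d,) vector.
    """
    n, k = probs.shape
    onehot = np.eye(k)[probs.argmax(axis=1)]
    return ((probs - onehot)[:, :, None] * features[:, None, :]).reshape(n, -1)

def kmeanspp_select(emb, batch_size, rng):
    """k-means++ seeding over embeddings.

    Each subsequent point is sampled with probability proportional to its
    squared distance from the nearest already-selected point, which jointly
    rewards large gradient magnitude and diversity within the batch.
    (First point chosen uniformly here -- an assumption of this sketch.)
    """
    idx = [int(rng.integers(len(emb)))]
    d2 = np.sum((emb - emb[idx[0]]) ** 2, axis=1)
    while len(idx) < batch_size:
        nxt = int(rng.choice(len(emb), p=d2 / d2.sum()))
        idx.append(nxt)
        d2 = np.minimum(d2, np.sum((emb - emb[nxt]) ** 2, axis=1))
    return idx

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=50)      # fake softmax outputs
features = rng.normal(size=(50, 4))             # fake penultimate features
batch = kmeanspp_select(gradient_embeddings(probs, features), 5, rng)
```

Because already-selected points have zero distance to themselves, they receive zero sampling probability, so the returned batch contains no duplicates without any explicit filtering.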

