
LEARNING ROBUST REPRESENTATIONS VIA MULTI-VIEW INFORMATION BOTTLENECK

2020-01-02

Abstract

The information bottleneck method of Tishby et al. (2000) provides an information-theoretic approach to representation learning: an encoder is trained to retain all the information relevant for predicting the label while minimizing the amount of other, superfluous information in the representation. The original formulation, however, requires labeled data in order to identify which information is superfluous. In this work, we extend this ability to the multi-view unsupervised setting, in which two views of the same underlying entity are provided but the label is unknown. This enables us to identify superfluous information as that which is not shared by both views. A theoretical analysis leads to the definition of a new multi-view model that produces state-of-the-art results on the Sketchy dataset and on label-limited versions of the MIR-Flickr dataset. We also extend our theory to the single-view setting by taking advantage of standard data augmentation techniques, empirically showing better generalization than traditional unsupervised approaches to representation learning.
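The core idea described above, keeping only the information shared by the two views and discarding view-specific (superfluous) information, can be summarized in a small training objective. The sketch below is a minimal PyTorch illustration, assuming two Gaussian encoders, an InfoNCE-style mutual-information lower bound, and a symmetrized-KL penalty weighted by a coefficient beta; these architectural and estimator choices are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a multi-view information-bottleneck objective:
# maximize the information shared by the two representations while
# penalizing information that is specific to a single view.
# Encoder sizes, the InfoNCE estimator, and beta are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

class GaussianEncoder(nn.Module):
    """Maps one view to a diagonal-Gaussian posterior over the representation z."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 2 * z_dim)
        )

    def forward(self, x):
        mu, log_sigma = self.net(x).chunk(2, dim=-1)
        return Normal(mu, log_sigma.exp())

def info_nce(z1, z2, temperature=0.1):
    """Sample-based lower bound on I(z1; z2) over a batch
    (InfoNCE, up to an additive log-batch-size constant)."""
    logits = z1 @ z2.t() / temperature                   # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return -F.cross_entropy(logits, labels)

def mib_loss(enc1, enc2, v1, v2, beta=1.0):
    p1, p2 = enc1(v1), enc2(v2)
    z1, z2 = p1.rsample(), p2.rsample()                  # reparameterized samples
    # Maximize shared information between the two representations ...
    mi_term = info_nce(z1, z2)
    # ... while discouraging view-specific information via a symmetrized KL
    skl = 0.5 * (kl_divergence(p1, p2) + kl_divergence(p2, p1)).sum(-1).mean()
    return -mi_term + beta * skl
```

In the multi-view setting, v1 and v2 would be two observations of the same entity (for example, a sketch/photo pair); in the single-view extension mentioned in the abstract, they would simply be two independent augmentations of the same input.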
