ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring


2020-01-02

Abstract

We improve the recently-proposed “MixMatch” semi-supervised learning algorithm by introducing two new techniques: distribution alignment and augmentation anchoring. Distribution alignment encourages the marginal distribution of predictions on unlabeled data to be close to the marginal distribution of ground-truth labels. Augmentation anchoring feeds multiple strongly augmented versions of an input into the model and encourages each output to be close to the prediction for a weakly-augmented version of the same input. To produce strong augmentations, we propose a variant of AutoAugment which learns the augmentation policy while the model is being trained. Our new algorithm, dubbed ReMixMatch, is significantly more data-efficient than prior work, requiring between 5× and 16× less data to reach the same accuracy. For example, on CIFAR-10 with 250 labeled examples we reach 93.73% accuracy (compared to MixMatch’s accuracy of 93.58% with 4,000 examples) and a median accuracy of 84.92% with just four labels per class. We make our code and data open-source at URL blinded for peer review.
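The distribution-alignment step described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name and the example class marginals are hypothetical, and the running average of model predictions is assumed to be maintained elsewhere during training. The core operation is scaling each prediction by the ratio of the labeled-data class marginal to the running marginal of model predictions, then renormalizing to a valid distribution.

```python
import numpy as np

def distribution_alignment(q, labeled_marginal, running_pred_marginal):
    """Align a model prediction q on an unlabeled example.

    Scales q by the ratio of the ground-truth class marginal (from
    labeled data) to a running average of the model's predicted
    marginal, then renormalizes so the result sums to 1.
    """
    aligned = q * (labeled_marginal / running_pred_marginal)
    return aligned / aligned.sum()

# Hypothetical 3-class example where the model over-predicts class 0.
q = np.array([0.6, 0.3, 0.1])                  # prediction on an unlabeled input
labeled_marginal = np.array([1/3, 1/3, 1/3])   # balanced labeled classes
running_pred = np.array([0.5, 0.3, 0.2])       # running mean of predictions

q_aligned = distribution_alignment(q, labeled_marginal, running_pred)
```

After alignment, the over-predicted class 0 is pushed down relative to the raw prediction, nudging the aggregate prediction distribution toward the labeled-data marginal.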

