Asymmetric Tri-training for Unsupervised Domain Adaptation


Abstract

It is important to apply models trained on a large number of labeled samples to different domains because collecting many labeled samples in various domains is expensive. To learn discriminative representations for the target domain, we assume that artificially labeling the target samples can result in a good representation. Tri-training leverages three classifiers equally to provide pseudo-labels to unlabeled samples; however, the method does not assume that the labeled samples are generated from a different domain. In this paper, we propose an asymmetric tri-training method for unsupervised domain adaptation, where we assign pseudo-labels to unlabeled samples and train the neural networks as if they were true labels. In our work, we use three networks asymmetrically; by asymmetric, we mean that two networks are used to label unlabeled target samples, and one network is trained on the pseudo-labeled samples to obtain target-discriminative representations. Our proposed method achieves state-of-the-art performance on the benchmark digit recognition datasets for domain adaptation.
