Sliced Wasserstein Discrepancy for Unsupervised Domain Adaptation


Abstract: In this work, we connect two distinct concepts for unsupervised domain adaptation: feature distribution alignment between domains by utilizing the task-specific decision boundary [57] and the Wasserstein metric [72]. Our proposed sliced Wasserstein discrepancy (SWD) is designed to capture the natural notion of dissimilarity between the outputs of task-specific classifiers. It provides a geometrically meaningful guidance to detect target samples that are far from the support of the source and enables efficient distribution alignment in an end-to-end trainable fashion. In the experiments, we validate the effectiveness and genericness of our method on digit and sign recognition, image classification, semantic segmentation, and object detection.
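To make the core idea concrete, below is a minimal sketch of how a sliced Wasserstein discrepancy between the outputs of two classifiers can be estimated: project both batches of class-probability outputs onto random one-dimensional directions, sort each projection, and average the distance between the matched order statistics. This is an illustrative sketch only, not the authors' released implementation; the function name `sliced_wasserstein_discrepancy`, the parameter `num_projections`, and the choice of an absolute (W1-style) cost on the sorted projections are assumptions made here for clarity.

```python
import numpy as np


def sliced_wasserstein_discrepancy(p1, p2, num_projections=128, seed=None):
    """Monte Carlo estimate of a sliced 1-D Wasserstein discrepancy between
    two batches of classifier outputs p1, p2 of shape (N, C).

    Sketch of the general SWD idea (random 1-D projections + sorted matching),
    not the paper's reference code.
    """
    rng = np.random.default_rng(seed)
    n, c = p1.shape
    # Random directions on the unit sphere in R^C.
    theta = rng.normal(size=(c, num_projections))
    theta /= np.linalg.norm(theta, axis=0, keepdims=True)
    # Project both output batches onto every direction: shape (N, M).
    proj1 = p1 @ theta
    proj2 = p2 @ theta
    # For equal-sized empirical distributions, the 1-D Wasserstein distance
    # is obtained by sorting each projection and matching order statistics.
    proj1_sorted = np.sort(proj1, axis=0)
    proj2_sorted = np.sort(proj2, axis=0)
    # Average the transport cost over samples and projection directions.
    return np.mean(np.abs(proj1_sorted - proj2_sorted))


if __name__ == "__main__":
    # Toy usage: two batches of 5-class probability vectors drawn from
    # different Dirichlet distributions stand in for two classifiers' outputs.
    rng = np.random.default_rng(0)
    a = rng.dirichlet(np.ones(5), size=64)
    b = rng.dirichlet(np.ones(5) * 3.0, size=64)
    print(sliced_wasserstein_discrepancy(a, b, seed=0))
```

In an adversarial alignment setup of this kind, such a discrepancy is typically maximized with respect to the two classifiers and minimized with respect to the shared feature extractor; the sketch above only shows the discrepancy estimate itself.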
