
Cross Modal Distillation for Supervision Transfer


Abstract

In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as a supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. We transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both these cross modal supervision transfers.
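To make the supervision-transfer idea concrete, below is a minimal PyTorch sketch of the core training step: a teacher network pretrained on the labeled modality (RGB) produces feature targets, and a student network on the paired unlabeled modality (e.g. depth) is trained to regress those features. The choice of ResNet-18, the MSE feature-matching loss, and the 3-channel depth encoding are assumptions for illustration only; the paper uses its own architectures and matches representations at chosen mid-level layers.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Teacher: pretrained on the large labeled modality (RGB), kept frozen.
teacher = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
teacher_feat = nn.Sequential(*list(teacher.children())[:-1])  # drop the classifier head
teacher_feat.eval()
for p in teacher_feat.parameters():
    p.requires_grad = False

# Student: same architecture, trained from scratch on the unlabeled paired
# modality (depth rendered as a 3-channel image, e.g. an HHA-style encoding).
student = models.resnet18(weights=None)
student_feat = nn.Sequential(*list(student.children())[:-1])

criterion = nn.MSELoss()  # match student features to teacher features
optimizer = torch.optim.SGD(student_feat.parameters(), lr=0.01, momentum=0.9)

def distillation_step(rgb_batch, depth_batch):
    """One supervision-transfer step on a paired (RGB, depth) mini-batch."""
    with torch.no_grad():
        target = teacher_feat(rgb_batch).flatten(1)   # RGB representation (target)
    pred = student_feat(depth_batch).flatten(1)       # depth representation (prediction)
    loss = criterion(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for one paired mini-batch.
rgb = torch.randn(8, 3, 224, 224)
depth = torch.randn(8, 3, 224, 224)
print(distillation_step(rgb, depth))
```

After this distillation phase, the student can be fine-tuned on whatever limited labeled data exists for the new modality, which is the pre-training use case the abstract describes.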


