SEMANTICALLY-GUIDED REPRESENTATION LEARNING FOR SELF-SUPERVISED MONOCULAR DEPTH

2020-01-02

Abstract

Self-supervised learning is showing great promise for monocular depth estimation, using geometry as the only source of supervision. Depth networks are indeed capable of learning representations that relate visual appearance to 3D properties by implicitly leveraging category-level patterns. In this work we investigate how to more directly leverage this semantic structure to guide geometric representation learning, while remaining in the self-supervised regime. Instead of using semantic labels and proxy losses in a multi-task approach, we propose a new architecture that leverages fixed pretrained semantic segmentation networks to guide self-supervised representation learning via pixel-adaptive convolutions. Furthermore, we propose a two-stage training process to overcome a common semantic bias on dynamic objects via resampling. Our method improves upon the state of the art for self-supervised monocular depth prediction over all pixels, on fine-grained details, and per semantic category.
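The core architectural idea, injecting features from a frozen pretrained segmentation network into the depth network through pixel-adaptive convolutions (Su et al., CVPR 2019), can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: the layer uses a fixed Gaussian adapting kernel on guidance-feature differences (the general formulation also admits learned kernels), and the names `PixelAdaptiveConv2d` and `guide` are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelAdaptiveConv2d(nn.Module):
    """Pixel-adaptive convolution: spatially shared weights, modulated
    per pixel by a kernel on guidance-feature differences. Minimal
    hypothetical variant with a fixed Gaussian adapting kernel."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.k = kernel_size
        self.pad = kernel_size // 2
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        # x:     (B, C, H, W) depth-network features
        # guide: (B, G, H, W) features from the frozen segmentation network
        B, C, H, W = x.shape
        k2 = self.k * self.k
        # Stack each k x k neighborhood into columns: (B, C*k2, H*W).
        x_unf = F.unfold(x, self.k, padding=self.pad)
        g_unf = F.unfold(guide, self.k, padding=self.pad)
        g_unf = g_unf.view(B, guide.shape[1], k2, H * W)
        g_ctr = g_unf[:, :, k2 // 2 : k2 // 2 + 1]  # window-center guidance
        # Gaussian adapting kernel: close to 1 where a neighbor shares the
        # center pixel's semantics, small across semantic boundaries.
        adapt = torch.exp(-0.5 * ((g_unf - g_ctr) ** 2).sum(dim=1))  # (B, k2, HW)
        # Modulate neighbor contributions, then apply the shared weights.
        x_unf = x_unf.view(B, C, k2, H * W) * adapt.unsqueeze(1)
        w = self.weight.view(self.weight.shape[0], -1)  # (out_ch, C*k2)
        out = torch.einsum("oc,bcl->bol", w, x_unf.view(B, C * k2, H * W))
        return out.view(B, -1, H, W) + self.bias.view(1, -1, 1, 1)


if __name__ == "__main__":
    pac = PixelAdaptiveConv2d(in_ch=64, out_ch=64)
    feats = torch.randn(2, 64, 48, 64)   # depth-network features
    sem = torch.randn(2, 16, 48, 64)     # frozen segmentation features
    out = pac(feats, sem.detach())       # detach: segmentation net stays fixed
    print(out.shape)                     # torch.Size([2, 64, 48, 64])
```

In this setup `guide` would come from an intermediate layer of the frozen segmentation network, resized to the depth feature resolution, so that the otherwise content-agnostic depth convolutions are modulated differently on either side of a semantic boundary.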
