Multi-modal Unsupervised Feature Learning for RGB-D Scene Labeling

2020-04-07

Abstract

Most existing approaches to RGB-D indoor scene labeling employ hand-crafted features for each modality independently and combine them heuristically. There have been some attempts at learning features directly from raw RGB-D data, but the performance has not been satisfactory. In this paper, we adapt unsupervised feature learning to RGB-D labeling, casting it as a multi-modality learning problem. Our framework performs feature learning and feature encoding simultaneously, which significantly boosts performance. By stacking the basic learning structure, higher-level features are derived and combined with lower-level features to better represent RGB-D data. Experimental results on the benchmark NYU depth dataset show that our method achieves competitive performance compared with the state of the art.
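The abstract describes learning features from raw RGB-D data per modality and encoding them jointly. As a rough single-layer illustration of this family of techniques (not the paper's actual method), the sketch below learns a small k-means patch dictionary separately for the RGB and depth modalities, encodes patches with a triangle activation, and concatenates the two codes. All function names, dictionary sizes, and patch parameters here are illustrative assumptions.

```python
import numpy as np

def extract_patches(img, patch, n, rng):
    """Sample n random square patches from an image and flatten them."""
    H, W = img.shape[:2]
    out = []
    for _ in range(n):
        y = rng.integers(0, H - patch + 1)
        x = rng.integers(0, W - patch + 1)
        out.append(img[y:y + patch, x:x + patch].ravel())
    return np.asarray(out, dtype=np.float64)

def kmeans(X, k, iters, rng):
    """Minimal k-means: returns k centroids acting as a patch dictionary."""
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                C[j] = pts.mean(0)
    return C

def encode(X, C):
    """Triangle activation: max(0, mean distance - distance to each atom)."""
    d = np.sqrt(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1))
    mu = d.mean(axis=1, keepdims=True)
    return np.maximum(0.0, mu - d)

# Toy RGB-D input (random data stands in for a real frame).
rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))
depth = rng.random((64, 64))

# Learn one dictionary per modality, then encode and concatenate.
p_rgb = extract_patches(rgb, patch=6, n=200, rng=rng)
p_dep = extract_patches(depth, patch=6, n=200, rng=rng)
C_rgb = kmeans(p_rgb, k=16, iters=10, rng=rng)
C_dep = kmeans(p_dep, k=16, iters=10, rng=rng)
feats = np.concatenate([encode(p_rgb, C_rgb), encode(p_dep, C_dep)], axis=1)
```

Stacking another such layer on top of `feats` (and combining it with the lower-level codes) would mirror the hierarchical structure the abstract describes.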

