Abstract
Most existing approaches to RGB-D indoor scene labeling employ hand-crafted features for each modality independently and combine them heuristically. There have been some attempts to learn features directly from raw RGB-D data, but the performance has not been satisfactory. In this paper, we adapt unsupervised feature learning to RGB-D labeling, casting it as a multi-modality learning problem. Our framework performs feature learning and feature encoding simultaneously, which significantly boosts performance. By stacking the basic learning structure, higher-level features are derived and combined with lower-level features to better represent RGB-D data. Experimental results on the benchmark NYU depth dataset show that our method achieves competitive performance compared with the state of the art.