Abstract
In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an average precision of 37.3%, which is a 56% relative improvement over existing methods. We then focus on the task of instance segmentation where we label pixels belonging to object instances found by our detector. For this task, we propose a decision forest approach that classifies pixels in the detection window as foreground or background using a family of unary and binary tests that query shape and geocentric pose features. Finally, we use the output from our object detectors in an existing superpixel classification framework for semantic scene segmentation and achieve a 24% relative improvement over current state-of-the-art for the object categories that we study. We believe advances such as those represented in this paper will facilitate the use of perception in fields like robotics.
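The geocentric embedding described above can be sketched as a per-pixel transform of a depth image into three channels: horizontal disparity, height above ground, and angle with gravity. The sketch below is a minimal illustration, not the paper's implementation: the camera intrinsics, stereo baseline, and gravity direction are placeholder assumptions (the paper estimates gravity from the data), and surface normals are approximated by finite differences of the back-projected point cloud.

```python
import numpy as np

def geocentric_embedding(depth, fx=518.8, fy=519.5, cx=320.0, cy=240.0,
                         baseline=0.075,
                         gravity=np.array([0.0, 1.0, 0.0])):
    """Encode a depth image (meters) as three geocentric channels.

    Intrinsics, baseline, and the gravity direction (+y, i.e. the camera
    y-axis pointing down) are illustrative assumptions, not values or
    estimates from the paper.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Back-project pixels to a 3-D point cloud using the pinhole model.
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)

    # Channel 1: horizontal disparity (baseline * focal length / depth).
    disparity = np.where(z > 0, baseline * fx / np.maximum(z, 1e-6), 0.0)

    # Channel 2: height above the lowest observed point, measured
    # against the (assumed) gravity direction.
    height_along_gravity = -(points @ gravity)  # larger = higher up
    height = height_along_gravity - height_along_gravity.min()

    # Channel 3: angle (degrees) between the local surface normal and
    # gravity; normals come from finite differences of the point cloud.
    du = np.gradient(points, axis=1)
    dv = np.gradient(points, axis=0)
    normals = np.cross(du, dv)
    norms = np.linalg.norm(normals, axis=-1, keepdims=True)
    normals = normals / np.maximum(norms, 1e-6)
    angle = np.degrees(np.arccos(np.clip(normals @ gravity, -1.0, 1.0)))

    return np.stack([disparity, height, angle], axis=-1)
```

On a flat frontoparallel plane at constant depth, this yields constant disparity, height varying linearly down the image, and a 90-degree angle with gravity everywhere, which matches the intuition that vertical surfaces are orthogonal to the gravity vector.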