Abstract
For applications such as augmented reality and autonomous driving, self-localization (camera pose estimation) and scene parsing are crucial technologies. In this paper, we propose a unified framework that tackles these two problems simultaneously. The uniqueness of our design is a sensor fusion scheme that integrates camera videos, motion sensors (GPS/IMU), and a 3D semantic map to make the system robust and efficient. Specifically, we first obtain an initial coarse camera pose
from consumer-grade GPS/IMU, based on which a label
map can be rendered from the 3D semantic map. Then, the
rendered label map and the RGB image are jointly fed into
a pose CNN, yielding a corrected camera pose. In addition, to incorporate temporal information, a multi-layer recurrent neural network (RNN) is further deployed to improve the pose accuracy. Finally, based on the pose from the RNN,
we render a new label map, which is fed together with
the RGB image into a segment CNN that produces per-pixel semantic labels. To validate our approach, we
build a dataset with registered 3D point clouds and video
camera images. Both the point clouds and the images are
semantically-labeled. Each video frame has ground truth
pose from highly accurate motion sensors. We show that, in practice, pose estimation relying solely on images, as in PoseNet [25], may fail due to confusion between similar street views, so fusing multiple sensors is important. Finally, various ablation studies demonstrate the effectiveness of the proposed system. In particular, we show that
scene parsing and pose estimation are mutually beneficial, leading to a more robust and accurate system.
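
To make the described pipeline concrete, the following is a minimal sketch of its per-frame dataflow. The component names (render_label_map, pose_cnn, pose_rnn, segment_cnn) are hypothetical placeholders standing in for the renderer and networks described above; this is an illustration, not the authors' implementation.

# Minimal, illustrative sketch of the per-frame dataflow described above.
# All component names (render_label_map, pose_cnn, pose_rnn, segment_cnn)
# are hypothetical placeholders, not the authors' released code.
def localize_and_parse(rgb_frames, coarse_poses, semantic_map,
                       render_label_map, pose_cnn, pose_rnn, segment_cnn):
    refined_poses, parsings = [], []
    hidden = None  # recurrent state carried across frames
    for rgb, coarse_pose in zip(rgb_frames, coarse_poses):
        # 1. Render a label map from the coarse GPS/IMU pose and the 3D semantic map.
        label_map = render_label_map(semantic_map, coarse_pose)
        # 2. The pose CNN corrects the coarse pose using the RGB image and rendered labels.
        corrected_pose = pose_cnn(rgb, label_map)
        # 3. The multi-layer RNN refines the pose using temporal information.
        refined_pose, hidden = pose_rnn(corrected_pose, hidden)
        # 4. Re-render with the refined pose and run the segment CNN for per-pixel labels.
        refined_label_map = render_label_map(semantic_map, refined_pose)
        parsings.append(segment_cnn(rgb, refined_label_map))
        refined_poses.append(refined_pose)
    return refined_poses, parsings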