Feature Mapping for Learning Fast and Accurate 3D Pose Inference
from Synthetic Images
Abstract
We propose a simple and efficient method for exploiting
synthetic images when training a Deep Network to predict
a 3D pose from an image. The ability to use synthetic images for training a Deep Network is extremely valuable, as
it is easy to create a virtually infinite training set made of
such images, while capturing and annotating real images
can be very cumbersome. However, synthetic images do
not resemble real images exactly, and using them for training can result in suboptimal performance. It was recently
shown that for exemplar-based approaches, it is possible to
learn a mapping from the exemplar representations of real
images to the exemplar representations of synthetic images.
In this paper, we show that this approach is more general,
and that a network can also be applied after the mapping to
infer a 3D pose: At run-time, given a real image of the target object, we first compute the features for the image, map
them to the feature space of synthetic images, and finally
use the resulting features as input to another network that
predicts the 3D pose. Since this network can be trained very
effectively by using synthetic images, it performs very well
in practice, and inference is faster and more accurate than
with an exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for 3D object pose estimation from color images, and the NYU dataset for 3D hand
pose estimation from depth maps. We show that it allows us
to outperform the state-of-the-art on both datasets.
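
The run-time pipeline described above (compute features for the real image, map them into the synthetic feature space, regress the 3D pose from the mapped features) can be illustrated with a minimal sketch. The sketch below assumes PyTorch; the module definitions, feature dimension, and 6-DoF pose parameterization are illustrative assumptions for exposition, not the architecture used in the paper.

    # Minimal sketch of the run-time pipeline; all names and sizes
    # are illustrative assumptions, not the authors' architecture.
    import torch
    import torch.nn as nn

    class FeatureMapper(nn.Module):
        """Maps real-image features into the synthetic feature space."""
        def __init__(self, dim=1024):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, dim), nn.ReLU(),
                nn.Linear(dim, dim),
            )

        def forward(self, real_feats):
            return self.net(real_feats)

    # Hypothetical components: a feature extractor shared by real and
    # synthetic images, the mapping network above, and a pose regressor
    # trained on synthetic features only.
    feature_extractor = nn.Sequential(
        nn.Flatten(), nn.Linear(3 * 64 * 64, 1024), nn.ReLU(),
    )
    mapper = FeatureMapper(dim=1024)
    pose_regressor = nn.Linear(1024, 6)  # e.g., a 6-DoF pose vector

    def predict_pose(real_image):
        # 1) compute features for the real image
        feats = feature_extractor(real_image)
        # 2) map them to the feature space of synthetic images
        synth_feats = mapper(feats)
        # 3) regress the 3D pose from the mapped features
        return pose_regressor(synth_feats)

    # Example: one 64x64 RGB image
    pose = predict_pose(torch.randn(1, 3, 64, 64))
    print(pose.shape)  # torch.Size([1, 6])

In this reading, only the mapper needs real training images; the pose regressor can be trained entirely on the virtually infinite supply of synthetic features, which is what makes the approach attractive.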