
Deep Exemplar 2D-3D Detection by Adapting from Real to Rendered Views

2019-12-23

Abstract

This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits of recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset [36], and object category detection, where we outperform Aubry et al. [3] for "chair" detection on a subset of the Pascal VOC dataset.
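The compositing step described above — pasting rendered views of textured object models onto natural-image backgrounds to produce training data for the adaptation — can be sketched roughly as follows. The `composite` helper, array shapes, and placement logic here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def composite(render_rgba, background_rgb, top_left=(0, 0)):
    """Alpha-composite a rendered object crop (h, w, 4) onto a natural
    image (H, W, 3), synthesizing a training example whose object pixels
    come from the CAD render and whose context comes from a real photo.
    NOTE: illustrative sketch, not the paper's pipeline."""
    out = background_rgb.astype(np.float32).copy()
    y, x = top_left
    h, w = render_rgba.shape[:2]
    rgb = render_rgba[..., :3].astype(np.float32)
    # Normalize the render's alpha channel to [0, 1] for blending.
    alpha = render_rgba[..., 3:4].astype(np.float32) / 255.0
    region = out[y:y + h, x:x + w]
    # Standard "over" compositing: render where opaque, photo elsewhere.
    out[y:y + h, x:x + w] = alpha * rgb + (1.0 - alpha) * region
    return out.astype(np.uint8)

# Stand-ins for a CAD render (opaque red patch) and a natural image.
render = np.zeros((4, 4, 4), dtype=np.uint8)
render[..., 0] = 200        # red channel
render[..., 3] = 255        # fully opaque
background = np.full((10, 10, 3), 30, dtype=np.uint8)
synthetic = composite(render, background, top_left=(2, 2))
```

Pairs of such composites and their source photos give the network aligned real/rendered views from which the feature adaptation can be learned.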

