Object Detection from Large-Scale 3D Datasets Using Bottom-Up and Top-Down Descriptors

2020-03-30

Abstract

We propose an approach for detecting objects in large-scale range datasets that combines bottom-up and top-down processes. In the bottom-up stage, fast-to-compute local descriptors are used to detect potential target objects. The object hypotheses are verified after alignment in a top-down stage using global descriptors that capture larger-scale structural information. We have found that the combination of spin images and Extended Gaussian Images, as local and global descriptors respectively, provides a good trade-off between efficiency and accuracy. We present results on real outdoor scenes containing millions of scanned points and hundreds of targets. Our results compare favorably to the state of the art: the method applies to much larger scenes captured under less controlled conditions, detects object classes rather than specific instances, and accurately aligns the query with the best-matching model, thus obtaining precise segmentation.
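The two-stage pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the spin-image and Extended Gaussian Image descriptors are replaced by toy radial and direction histograms, and all function names, radii, and thresholds here are hypothetical.

```python
import numpy as np

def local_descriptor(points, center, radius=1.0):
    # Toy stand-in for a spin image: a normalized histogram of
    # distances from `center` to its neighbors within `radius`.
    d = np.linalg.norm(points - center, axis=1)
    hist, _ = np.histogram(d[d < radius], bins=8, range=(0.0, radius))
    return hist / max(hist.sum(), 1)

def global_descriptor(points):
    # Toy stand-in for an Extended Gaussian Image: a normalized 3D
    # histogram of per-point directions from the centroid (a crude
    # proxy for surface-normal orientations on the Gaussian sphere).
    dirs = points - points.mean(axis=0)
    dirs = dirs / (np.linalg.norm(dirs, axis=1, keepdims=True) + 1e-9)
    hist, _ = np.histogramdd(dirs, bins=(4, 4, 4), range=[(-1.0, 1.0)] * 3)
    return hist.ravel() / max(hist.sum(), 1)

def detect(scene, model, local_thresh=0.1, global_thresh=0.1):
    # Bottom-up stage: propose scene points whose cheap local
    # descriptor is close to the model's (hypothesis generation).
    model_local = local_descriptor(model, model.mean(axis=0))
    model_global = global_descriptor(model)
    hypotheses = [p for p in scene
                  if np.abs(local_descriptor(scene, p) - model_local).sum()
                  < local_thresh]
    # Top-down stage: verify each hypothesis by comparing the global
    # descriptor of its neighborhood against the model's.
    verified = []
    for p in hypotheses:
        patch = scene[np.linalg.norm(scene - p, axis=1) < 1.0]
        if np.abs(global_descriptor(patch) - model_global).sum() < global_thresh:
            verified.append(p)
    return verified
```

In the paper's setting, the cheap local stage prunes millions of scanned points down to a short candidate list, so the more expensive global verification (with alignment) only runs on a few hypotheses.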
