PanoContext: A Whole-Room 3D Context Model for Panoramic Scene Understanding

2020-04-06

Abstract

The field of view of standard cameras is very small, which is one of the main reasons that contextual information is not as useful as it should be for object detection. To overcome this limitation, we advocate the use of 360° full-view panoramas in scene understanding, and propose a whole-room context model in 3D. For an input panorama, our method outputs 3D bounding boxes of the room and all major objects inside, together with their semantic categories. Our method generates 3D hypotheses based on contextual constraints and ranks the hypotheses holistically, combining both bottom-up and top-down context information. To train our model, we construct an annotated panorama dataset and reconstruct the 3D model from a single view using manual annotation. Experiments show that, based solely on 3D context without any image-region category classifier, we achieve performance comparable to a state-of-the-art object detector. This demonstrates that when the FOV is large, context is as powerful as object appearance. All data and source code are available online.
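The abstract describes ranking whole-room hypotheses holistically by combining bottom-up image evidence with top-down context. A minimal sketch of that idea is below; it is not the authors' implementation, and the scoring functions, co-occurrence table, and weight `w_context` are all hypothetical placeholders for the learned terms in the paper.

```python
# Hypothetical sketch of holistic hypothesis ranking (not the paper's code).
# A hypothesis is a whole-room configuration: object boxes with semantic
# labels. It is scored jointly: bottom-up evidence measures how well each
# box matches the image, while top-down context measures how plausible the
# arrangement of objects is as a whole.

def bottom_up_score(hypothesis):
    # Placeholder: sum of per-object image-evidence scores.
    return sum(obj["evidence"] for obj in hypothesis["objects"])

def pairwise_context(label_a, label_b):
    # Hypothetical co-occurrence table standing in for learned context;
    # e.g. a bed with a nightstand is plausible, two beds overlapping less so.
    table = {
        frozenset(["bed", "nightstand"]): 1.0,
        frozenset(["bed"]): -2.0,  # two objects of the same label "bed"
    }
    return table.get(frozenset([label_a, label_b]), 0.0)

def top_down_score(hypothesis):
    # Placeholder: sum contextual plausibility over all object pairs.
    objs = hypothesis["objects"]
    return sum(
        pairwise_context(objs[i]["label"], objs[j]["label"])
        for i in range(len(objs))
        for j in range(i + 1, len(objs))
    )

def rank_hypotheses(hypotheses, w_context=0.5):
    # Holistic score: weighted sum of bottom-up and top-down terms;
    # return the highest-scoring whole-room hypothesis.
    return max(
        hypotheses,
        key=lambda h: bottom_up_score(h) + w_context * top_down_score(h),
    )
```

For example, a hypothesis pairing a bed with a nightstand can outrank one with slightly stronger image evidence but an implausible pair of beds, because the context term penalizes the latter.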

