
Single-Image Depth Perception in the Wild

2020-02-07

Abstract 

This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset, "Depth in the Wild," consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild.

Figure 1: We crowdsource annotations of relative depth and train a deep network to recover depth from a single image taken in unconstrained settings ("in the wild").
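The abstract describes training a pixel-wise metric-depth predictor from only ordinal (closer/farther/equal) annotations on point pairs. A common way to do this is a pairwise ranking loss over the network's predicted depths at the annotated points. The sketch below is illustrative, not the paper's exact implementation; the function name, the point-keyed depth mapping, and the label convention (`r = +1` if the first point is farther, `-1` if closer, `0` if equal) are assumptions for the example.

```python
import math

def relative_depth_loss(z, pairs):
    """Pairwise ranking loss for relative-depth supervision (sketch).

    z     : mapping from point id to the network's predicted depth there
    pairs : iterable of (a, b, r) with r in {+1, -1, 0};
            r = +1 means point a is annotated as farther than point b,
            r = -1 means closer, r = 0 means roughly equal depth
    """
    total = 0.0
    for a, b, r in pairs:
        diff = z[a] - z[b]
        if r == 0:
            # equal-depth pairs: penalize any predicted difference
            total += diff * diff
        else:
            # ordinal pairs: logistic ranking loss, small when the
            # predicted ordering agrees with the annotation
            total += math.log(1.0 + math.exp(-r * diff))
    return total
```

For example, a prediction that respects the annotation (`z['a'] > z['b']` with `r = +1`) incurs a smaller loss than one that violates it, so minimizing this objective pushes the network's per-pixel depths toward the annotated orderings without ever seeing metric ground truth.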

