Depth and surface normal estimation from monocular images using regression on deep features and hierarchical CRFs

2019-12-19

Abstract

Predicting the depth (or surface normal) of a scene from a single monocular color image is a challenging and essentially underdetermined problem. This paper tackles it by regression on deep convolutional neural network (DCNN) features, combined with a post-processing refinement step using conditional random fields (CRFs). Our framework works at two levels: the super-pixel level and the pixel level. First, we design a DCNN model to learn the mapping from multi-scale image patches to depth or surface normal values at the super-pixel level. Second, the estimated super-pixel depth or surface normal map is refined to the pixel level by exploiting various potentials on the map, including a data term, a smoothness term among super-pixels, and an autoregression term characterizing the local structure of the estimation map. The inference problem can be solved efficiently because it admits a closed-form solution. Experiments on the Make3D and NYU Depth V2 datasets show competitive results compared with recent state-of-the-art methods.
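The closed-form inference mentioned above can be illustrated with a minimal sketch: when the data and smoothness potentials are quadratic, the MAP depth map minimizes a convex quadratic energy, so setting its gradient to zero yields a linear system. The graph, weights, and energy below are simplified assumptions for illustration, not the paper's exact potentials (the autoregression term is omitted).

```python
import numpy as np

def refine_depth(z, edges, weights, lam=1.0):
    """Minimize sum_i (d_i - z_i)^2 + lam * sum_(i,j) w_ij (d_i - d_j)^2.

    z       : initial (e.g. DCNN-regressed) depth per node, shape (n,)
    edges   : list of (i, j) neighbor pairs between super-pixels
    weights : pairwise affinities w_ij, one per edge
    """
    n = len(z)
    L = np.zeros((n, n))  # graph Laplacian of the smoothness term
    for (i, j), w in zip(edges, weights):
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    # Gradient of the energy set to zero gives (I + lam * L) d = z.
    return np.linalg.solve(np.eye(n) + lam * L, np.asarray(z, float))

# Toy example: three super-pixels in a chain; strong smoothing pulls the
# noisy middle estimate toward its neighbors.
d = refine_depth([1.0, 5.0, 1.0], [(0, 1), (1, 2)], [1.0, 1.0], lam=10.0)
```

In practice the Laplacian is sparse and the system is solved with a sparse solver; the point here is only that quadratic potentials make the refinement a single linear solve rather than an iterative CRF inference.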

