
DeMoN: Depth and Motion Network for Learning Monocular Stereo

2019-12-06
Abstract: In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images, and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, the results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and thus generalizes better to structures not seen during training.
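The abstract emphasizes a training loss based on spatial relative differences, i.e. penalizing errors in depth differences between neighboring pixels rather than absolute values alone. The snippet below is a minimal, single-scale sketch of what such a loss could look like in PyTorch; the function name `spatial_relative_loss`, the normalization with `eps`, and the L1 penalty are illustrative assumptions, not the paper's exact formulation.

```python
import torch


def spatial_relative_loss(pred, target, eps=1e-6):
    """Sketch of a loss on spatial relative differences (assumed form).

    pred, target: depth maps of shape (B, 1, H, W).
    Penalizes errors in normalized differences of neighboring pixels,
    which makes the loss sensitive to relative structure rather than
    absolute scale.
    """
    def normalized_diffs(x):
        # Horizontal and vertical differences, normalized by local magnitude.
        dx = (x[:, :, :, 1:] - x[:, :, :, :-1]) / (
            x[:, :, :, 1:].abs() + x[:, :, :, :-1].abs() + eps)
        dy = (x[:, :, 1:, :] - x[:, :, :-1, :]) / (
            x[:, :, 1:, :].abs() + x[:, :, :-1, :].abs() + eps)
        return dx, dy

    pdx, pdy = normalized_diffs(pred)
    tdx, tdy = normalized_diffs(target)

    # L1 penalty on the mismatch of relative differences in both directions.
    return torch.abs(pdx - tdx).mean() + torch.abs(pdy - tdy).mean()


if __name__ == "__main__":
    pred = torch.rand(2, 1, 48, 64, requires_grad=True)
    target = torch.rand(2, 1, 48, 64)
    loss = spatial_relative_loss(pred, target)
    loss.backward()
    print(float(loss))
```

In the paper this idea is applied to the network's depth predictions; a multi-scale variant (computing the differences at several pixel spacings) would be a natural extension of the sketch above.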

