Abstract
In this paper we formulate structure from motion as a
learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the
core part being an iterative network that is able to improve
its own predictions. The network estimates not only depth
and motion, but additionally surface normals, optical flow between the images, and the confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, the results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of
matching and, thus, better generalizes to structures not seen
during training.
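The loss based on spatial relative differences can be sketched as follows. This is a hedged illustration only, not the paper's exact formulation: the function name, the L1 penalty, and the set of step sizes are assumptions; the idea is to compare finite differences between neighboring pixels of the predicted and ground-truth maps rather than absolute values, which makes the loss invariant to a constant offset.

```python
import numpy as np

def spatial_relative_diff_loss(pred, gt, steps=(1, 2, 4)):
    """Hypothetical sketch of a spatial-relative-difference loss.

    Penalizes discrepancies between finite differences (relative values
    of neighboring pixels) of the prediction and the ground truth,
    computed at several step sizes.
    """
    loss = 0.0
    for h in steps:
        # horizontal finite differences at step size h
        dp_x = pred[:, h:] - pred[:, :-h]
        dg_x = gt[:, h:] - gt[:, :-h]
        # vertical finite differences at step size h
        dp_y = pred[h:, :] - pred[:-h, :]
        dg_y = gt[h:, :] - gt[:-h, :]
        # L1 penalty on the mismatch of the differences (an assumption)
        loss += np.abs(dp_x - dg_x).mean() + np.abs(dp_y - dg_y).mean()
    return loss
```

Because only differences of neighboring values enter the loss, adding a constant to the whole prediction leaves it unchanged, so the loss focuses on local depth structure rather than the absolute scale of the map.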