
Unsupervised Event-based Learning of Optical Flow, Depth, and Egomotion

2019-09-27

Abstract: In this work, we propose a novel framework for unsupervised learning for event cameras that learns motion information from only the event stream. In particular, we propose an input representation of the events in the form of a discretized volume that maintains the temporal distribution of the events, which we pass through a neural network to predict the motion of the events. This motion is used to attempt to remove any motion blur in the event image. We then propose a loss function applied to the motion-compensated event image that measures the motion blur in this image. We train two networks with this framework, one to predict optical flow, and one to predict egomotion and depths, and evaluate these networks on the Multi Vehicle Stereo Event Camera dataset, along with qualitative results from a variety of different scenes.
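As a rough illustration of the discretized event volume described in the abstract, the sketch below accumulates events into a fixed number of time bins, spreading each event across its two neighboring bins with linear interpolation along the temporal axis so that the timing of the events is preserved. The function name, array shapes, and the use of NumPy are assumptions made for illustration; this is not the authors' released implementation.

```python
import numpy as np

def event_volume(xs, ys, ts, ps, H, W, B):
    """Hypothetical sketch: build a (B, H, W) discretized event volume.

    xs, ys : integer pixel coordinates of each event
    ts     : event timestamps
    ps     : event polarities in {-1, +1}
    B      : number of temporal bins (assumed hyperparameter)
    """
    vol = np.zeros((B, H, W), dtype=np.float32)

    # Normalize timestamps of this event window to the range [0, B-1].
    t_norm = (B - 1) * (ts - ts.min()) / max(ts.max() - ts.min(), 1e-9)
    t0 = np.floor(t_norm).astype(int)

    # Each event contributes to its two nearest time bins with a
    # linear (triangular) kernel in time, preserving temporal ordering.
    for d in (0, 1):
        b = np.clip(t0 + d, 0, B - 1)
        w = np.maximum(0.0, 1.0 - np.abs(t_norm - (t0 + d)))
        np.add.at(vol, (b, ys, xs), ps * w)

    return vol
```

In this sketch, the resulting (B, H, W) tensor would be the input passed to the prediction network; collapsing the B bins by summation would recover an ordinary (blurred) event image.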

