
Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars

2019-10-21

Abstract: Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle’s steering angle. To make the best out of this sensor–algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large-scale event-camera dataset (1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras.
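The abstract describes adapting standard convolutional architectures to the output of event sensors and fine-tuning them with transfer learning to regress a steering angle. The sketch below illustrates one common way this kind of pipeline can be set up; the event representation (a two-channel positive/negative polarity histogram), the ResNet-18 backbone, and the `SteeringNet` / `events_to_frame` names are assumptions for illustration, not necessarily the paper's exact design.

```python
# Hypothetical sketch (not the paper's implementation): accumulate events into a
# 2-channel frame and regress a steering angle with an ImageNet-pretrained CNN.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models


def events_to_frame(events, height, width):
    """Accumulate (x, y, polarity) events into a 2-channel count histogram."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, p in events:
        # Channel 0 counts positive-polarity events, channel 1 negative ones.
        frame[0 if p > 0 else 1, y, x] += 1.0
    return torch.from_numpy(frame)


class SteeringNet(nn.Module):
    """ResNet-18 backbone adapted to event frames, regressing one angle."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Replace the first conv to accept 2 input channels (event polarities);
        # its pretrained weights are discarded, the rest are kept for transfer.
        backbone.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        # Replace the ImageNet classifier with a single-output regression head.
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x)  # predicted steering angle


if __name__ == "__main__":
    net = SteeringNet()
    dummy_events = [(10, 20, 1), (15, 25, -1)]          # toy event list
    frame = events_to_frame(dummy_events, height=224, width=224)
    angle = net(frame.unsqueeze(0))                      # batch of one frame
    print(angle.shape)                                   # torch.Size([1, 1])
```

Training such a model would typically minimize a regression loss (e.g. mean squared error) between the predicted and recorded steering angles; the choice of event representation and backbone above is only one plausible configuration.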

