
Mobile Video Object Detection with Temporally-Aware Feature Maps

2019-10-16

Abstract: This paper introduces an online model for object detection in videos designed to run in real time on low-powered mobile and embedded devices. Our approach combines fast single-image object detection with convolutional long short-term memory (LSTM) layers to create an interleaved recurrent-convolutional architecture. Additionally, we propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. Our network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames. This approach is substantially faster than existing video object detection methods, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the ImageNet VID 2015 dataset. Our model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.
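The abstract describes the core idea (a convolutional LSTM with a bottleneck projection that refines and propagates detector feature maps across frames) but gives no implementation details. Below is a minimal PyTorch sketch of what such a bottleneck convolutional LSTM cell might look like. The class name, kernel sizes, gate layout, and activation choices are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class BottleneckConvLSTMCell(nn.Module):
    """Sketch of a bottleneck convolutional LSTM cell (hypothetical layout).

    The concatenated input feature map and previous hidden state are first
    projected down to `hidden_channels` by a bottleneck convolution; the LSTM
    gates are then computed from that reduced tensor, which is where the
    computational savings described in the abstract come from.
    """

    def __init__(self, in_channels: int, hidden_channels: int):
        super().__init__()
        # Bottleneck projection: (input + previous hidden) -> hidden_channels.
        self.bottleneck = nn.Conv2d(in_channels + hidden_channels,
                                    hidden_channels, kernel_size=3, padding=1)
        # Single conv producing the four LSTM gates from the bottleneck features.
        self.gates = nn.Conv2d(hidden_channels, 4 * hidden_channels,
                               kernel_size=3, padding=1)

    def forward(self, x, state):
        h_prev, c_prev = state
        b = torch.relu(self.bottleneck(torch.cat([x, h_prev], dim=1)))
        i, f, o, g = torch.chunk(self.gates(b), 4, dim=1)
        c = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)


# Example use: refine a per-frame feature map from a single-image detector.
cell = BottleneckConvLSTMCell(in_channels=256, hidden_channels=64)
h = torch.zeros(1, 64, 20, 20)
c = torch.zeros(1, 64, 20, 20)
feat = torch.randn(1, 256, 20, 20)   # stand-in for a detector feature map
out, (h, c) = cell(feat, (h, c))     # temporally-aware feature map for this frame
```

In this sketch the hidden state doubles as the refined feature map that is passed to the detection head and carried to the next frame, mirroring the "refine and propagate feature maps across frames" behaviour the abstract describes.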

