
M3: Multimodal Memory Modelling for Video Captioning

2019-10-16

Abstract: Video captioning, which automatically translates video clips into natural language sentences, is a very important task in computer vision. By virtue of recent deep learning technologies, video captioning has made great progress. However, learning an effective mapping from the visual sequence space to the language space remains challenging because of long-term multimodal dependency modelling and semantic misalignment. Inspired by the facts that memory modelling offers potential advantages for long-term sequential problems [35] and that working memory is a key factor in visual attention [33], we propose a Multimodal Memory Model (M3) to describe videos, which builds a visual and textual shared memory to model the long-term visual-textual dependency and further guide visual attention on the described visual targets to resolve visual-textual alignment. Specifically, similar to [10], the proposed M3 attaches an external memory that stores and retrieves both visual and textual contents by interacting with the video and the sentence through multiple read and write operations. To evaluate the proposed model, we perform experiments on two public datasets: MSVD and MSR-VTT. The experimental results demonstrate that our method outperforms most of the state-of-the-art methods in terms of BLEU and METEOR.
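The abstract only sketches the core mechanism: an external memory shared by the visual and textual streams, accessed through attention-based read and write operations that in turn guide visual attention during decoding. Below is a minimal, illustrative Python/PyTorch sketch of such content-addressed memory read/write steps. The function names, slot count, and dimensions are assumptions made for illustration; this is not the authors' implementation, only the general external-memory pattern the abstract refers to.

import torch
import torch.nn.functional as F

def memory_read(memory, query):
    # Content-based read: score each memory slot against the query,
    # softmax the scores into read-attention weights, and return the
    # attention-weighted sum of slots.
    # memory: (num_slots, slot_dim), query: (slot_dim,)
    scores = memory @ query                     # (num_slots,)
    weights = F.softmax(scores, dim=0)          # read attention over slots
    return weights @ memory                     # (slot_dim,) read vector

def memory_write(memory, write_weights, erase_vec, add_vec):
    # Erase-then-add write, in the spirit of external-memory models:
    # each slot is partially erased and then updated with new content,
    # scaled by its write-attention weight.
    # write_weights: (num_slots,), erase_vec/add_vec: (slot_dim,)
    erase = write_weights.unsqueeze(1) * erase_vec.unsqueeze(0)
    add = write_weights.unsqueeze(1) * add_vec.unsqueeze(0)
    return memory * (1.0 - erase) + add

# Illustrative usage for one decoding step: read the shared memory with the
# current decoder state, then write updated content back (sizes are assumed).
memory = torch.zeros(8, 512)                    # 8 slots of 512 dims
query = torch.randn(512)                        # e.g. current decoder hidden state
read_vec = memory_read(memory, query)
write_w = F.softmax(memory @ query, dim=0)
memory = memory_write(memory, write_w, torch.sigmoid(query), torch.tanh(query))

In a captioning model of this kind, the read vector would condition both the word predictor and the visual attention over frame features, while the write step stores the freshly attended visual and textual content for later steps.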

