Abstract

Video captioning, which automatically translates video clips into natural language sentences, is an important task in computer vision. Thanks to recent deep learning techniques, video captioning has made great progress. However, learning an effective mapping from the visual sequence space to the language space remains challenging, due to long-term multimodal dependency modelling and semantic misalignment. Inspired by the facts that memory modelling offers potential advantages for long-term sequential problems [35] and that working memory is a key factor in visual attention [33], we propose a Multimodal Memory Model (M3) to describe videos, which builds a shared visual-textual memory to model long-term visual-textual dependencies and further guides visual attention to the described visual targets to resolve visual-textual alignment. Specifically, similar to [10], the proposed M3 attaches an external memory that stores and retrieves both visual and textual contents by interacting with the video and sentence through multiple read and write operations. To evaluate the proposed model, we perform experiments on two public datasets: MSVD and MSR-VTT. The experimental results demonstrate that our method outperforms most state-of-the-art methods in terms of BLEU and METEOR.
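The external-memory mechanism referenced above (content-based read and write operations over a slot matrix, in the spirit of [10]) can be illustrated with a minimal NumPy sketch. All names (`memory_read`, `memory_write`, the slot and feature dimensions) are illustrative assumptions, not the paper's actual implementation, and the erase/add parameterization follows the common Neural-Turing-Machine-style convention rather than M3's exact update rule:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(memory, key):
    """Content-based read: soft attention over memory slots by similarity to key.

    memory: (slots, dim) matrix; key: (dim,) query vector.
    Returns the attention-weighted slot content and the address weights.
    """
    scores = memory @ key           # (slots,) dot-product similarity
    weights = softmax(scores)       # soft addressing, sums to 1
    return weights @ memory, weights

def memory_write(memory, key, erase, add):
    """Soft write: erase then add, gated by the same content-based address.

    erase, add: (dim,) vectors controlling what is removed/inserted per slot.
    """
    _, w = memory_read(memory, key)
    memory = memory * (1.0 - np.outer(w, erase))  # partially erase addressed slots
    memory = memory + np.outer(w, add)            # blend in new content
    return memory

# Toy usage: 8 memory slots of dimension 16.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 16))
k = rng.standard_normal(16)
read_vec, weights = memory_read(M, k)
M_updated = memory_write(M, k, erase=np.ones(16), add=np.zeros(16))
```

In the full model, both the visual encoder and the language decoder would issue such reads and writes against the shared memory at each step, which is how the memory mediates long-term visual-textual interaction.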