Unsupervised Visual-Linguistic Reference Resolution in Instructional Videos


Abstract

We propose an unsupervised method for reference resolution in instructional videos, where the goal is to temporally link an entity (e.g., dressing) to the action (e.g., mix yogurt) that produced it. The key challenge is the inevitable visual-linguistic ambiguity arising from changes in both the visual appearance and the referring expression of an entity over the course of a video. This challenge is amplified by the fact that we aim to resolve references with no supervision. We address these challenges by learning a joint visual-linguistic model, in which linguistic cues help resolve visual ambiguities and vice versa. We validate our approach by training the model without supervision on more than two thousand unstructured cooking videos from YouTube, and show that our visual-linguistic model substantially improves upon a state-of-the-art linguistic-only model for reference resolution in instructional videos.
