Abstract
We consider the task of identifying human actions visible in online videos. We focus on the widespread genre of lifestyle vlogs, which consist of videos of people performing actions while verbally describing them. Our goal is to determine whether actions mentioned in the speech description of a video are visually present. We construct a dataset with crowdsourced manual annotations of visible actions, and introduce a multimodal algorithm that leverages information derived from visual and linguistic cues to automatically infer which actions are visible in a video. We demonstrate that our multimodal algorithm outperforms algorithms that rely on a single modality at a time.