Abstract
Several theories in cognitive neuroscience suggest that
when people interact with the world, or simulate interactions, they do so from a first-person egocentric perspective,
and seamlessly transfer knowledge between the third-person
(observer) and first-person (actor) perspectives. Despite this, learning
such models for human action recognition has not been
possible due to a lack of data. This paper takes a step
in this direction, with the introduction of Charades-Ego, a
large-scale dataset of paired first-person and third-person
videos, involving 112 people, with 4000 paired videos. This
enables learning the link between the actor and observer perspectives. We thereby address one of the biggest
bottlenecks facing egocentric vision research, providing a
link from first-person to the abundant third-person data on
the web. We use this data to learn a joint representation of
first and third-person videos, with only weak supervision,
and show its effectiveness for transferring knowledge from
the third-person to the first-person domain.