Abstract
Watching a 360°
sports video requires a viewer to continuously select a viewing angle, either through a sequence
of mouse clicks or head movements. To relieve the viewer
from this “360 piloting” task, we propose “deep 360 pilot”
– a deep learning-based agent for piloting through 360°
sports videos automatically. At each frame, the agent observes a panoramic image and knows the previously selected viewing angles. The task of the agent is
to shift the current viewing angle (i.e., action) to the next preferred one (i.e., goal). We propose to directly learn an
online policy of the agent from data. Specifically, we leverage a state-of-the-art object detector to propose a few candidate objects of interest (yellow boxes in Fig. 1). Then, a
recurrent neural network is used to select the main object
(green dashed boxes in Fig. 1). Given the main object and
previously selected viewing angles, our method regresses a
shift in viewing angle to move to the next one. We use the
policy gradient technique to jointly train our pipeline by minimizing (1) a regression loss measuring the distance between the selected and ground-truth viewing angles and (2) a smoothness loss encouraging smooth transitions in viewing angle, while maximizing (3) an expected reward for focusing on a foreground object. To evaluate our method, we built a
new 360-Sports video dataset consisting of five sports domains. We trained domain-specific agents and achieved the
best performance on viewing angle selection accuracy and
users’ preference compared to [53] and other baselines.
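
As a rough sketch of the training objective described above (the notation, squared-distance form, and trade-off weights $\lambda_s$, $\lambda_r$ are our assumptions rather than the paper's exact formulation), let $\hat{l}_t$ denote the viewing angle selected at frame $t$ and $l_t^{*}$ the ground-truth angle; the agent is trained to minimize

$$
\mathcal{L} \;=\; \sum_{t} \Big( \|\hat{l}_t - l_t^{*}\|^2 \;+\; \lambda_s \,\|\hat{l}_t - \hat{l}_{t-1}\|^2 \Big) \;-\; \lambda_r \,\mathbb{E}\big[R_t\big],
$$

where the first term is the viewing-angle regression loss, the second encourages smooth transitions between consecutive angles, and $R_t$ is the reward for keeping a foreground object in view; the expected-reward term is not differentiable with respect to the selection and is therefore optimized with the policy gradient technique.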