
Learning Compressible 360° Video Isomers

2019-10-15

Abstract: Standard video encoders developed for conventional narrow field-of-view video are widely applied to 360° video as well, with reasonable results. However, while this approach commits arbitrarily to a projection of the spherical frames, we observe that some orientations of a 360° video, once projected, are more compressible than others. We introduce an approach to predict the sphere rotation that will yield the maximal compression rate. Given video clips in their original encoding, a convolutional neural network learns the association between a clip's visual content and its compressibility at different rotations of a cubemap projection. Given a novel video, our learning-based approach efficiently infers the most compressible direction in one shot, without repeated rendering and compression of the source video. We validate our idea on thousands of video clips and multiple popular video codecs. The results show that this untapped dimension of 360° compression has substantial potential: "good" rotations are typically 8-10% more compressible than bad ones, and our learning approach can predict them reliably 82% of the time.
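To make the idea concrete, below is a minimal, hypothetical sketch (not the authors' actual architecture) of the kind of model the abstract describes: a small convolutional network that takes a cubemap-projected clip and outputs one score per candidate sphere rotation, so the most compressible rotation can be picked in a single forward pass instead of re-encoding the video at every rotation. The layer sizes, the 24-bin yaw grid, and the frame-stacking input format are illustrative assumptions.

```python
# Illustrative sketch only: a CNN that scores K discretized sphere rotations
# for a 360-degree clip and picks the one predicted to compress best.
# All sizes and the rotation grid are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

NUM_ROTATIONS = 24          # e.g. yaw sampled every 15 degrees (assumption)
FRAMES, H, W = 8, 128, 128  # short clip rendered to cubemap faces, downsampled

class RotationCompressibilityNet(nn.Module):
    """Predicts, per candidate rotation, how compressible the projected clip is."""
    def __init__(self, num_rotations: int = NUM_ROTATIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3 * FRAMES, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # One score per candidate rotation; higher = predicted more compressible.
        self.head = nn.Linear(128, num_rotations)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 3 * FRAMES, H, W) -- frames stacked along the channel axis.
        x = self.features(clip).flatten(1)
        return self.head(x)

model = RotationCompressibilityNet()
clip = torch.randn(1, 3 * FRAMES, H, W)      # stand-in for a real decoded clip
scores = model(clip)                          # shape (1, NUM_ROTATIONS)
best_rotation = scores.argmax(dim=1).item()   # index of the predicted best yaw bin
print(f"predicted most compressible rotation bin: {best_rotation}")
```

Training targets for such a network could come from brute-force measurement on a training set (re-encoding each clip at every candidate rotation and recording the compressed size), which is exactly the per-video cost the learned predictor avoids at inference time.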
