
ASSEMBLENET: SEARCHING FOR MULTI-STREAM NEURAL CONNECTIVITY IN VIDEO ARCHITECTURES

2019-12-30

Abstract
Learning to represent videos is a very challenging task both algorithmically and computationally. Standard video CNN architectures have been designed by directly extending architectures devised for image understanding to include the time dimension, using modules such as 3D convolutions, or by using a two-stream design to capture both appearance and motion in videos. We interpret a video CNN as a collection of multi-stream convolutional blocks connected to each other, and propose the approach of automatically finding neural architectures with better connectivity and spatio-temporal interactions for video understanding. This is done by evolving a population of overly-connected architectures guided by connection weight learning. Architectures combining representations that abstract different input types (i.e., RGB and optical flow) at multiple temporal resolutions are searched for, allowing different types or sources of information to interact with each other. Our method, referred to as AssembleNet, outperforms prior approaches on public video datasets, in some cases by a great margin.
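To make the idea of connection-weight-guided fusion concrete, below is a minimal, hypothetical PyTorch sketch of one multi-stream block that fuses several incoming streams through learnable connection weights before a 3D convolution. This is not the authors' implementation; the class name `WeightedConnectionBlock` and the softmax-over-logits weighting scheme are illustrative assumptions about how such a block could look.

```python
import torch
import torch.nn as nn

class WeightedConnectionBlock(nn.Module):
    """Toy sketch (not the authors' code): fuses multiple input streams
    with learnable connection weights, then applies a 3D convolution."""

    def __init__(self, num_inputs: int, channels: int):
        super().__init__()
        # One scalar logit per incoming connection; a softmax over them
        # lets training strengthen useful connections and suppress weak
        # ones, loosely mimicking connection-weight-guided search.
        self.connection_logits = nn.Parameter(torch.zeros(num_inputs))
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, inputs):
        # inputs: list of tensors of identical shape (N, C, T, H, W),
        # e.g., features derived from RGB and optical-flow streams.
        w = torch.softmax(self.connection_logits, dim=0)
        fused = sum(wi * x for wi, x in zip(w, inputs))
        return torch.relu(self.conv(fused))

# Usage: fuse two hypothetical streams (RGB- and flow-derived features).
rgb_feat = torch.randn(2, 16, 8, 32, 32)
flow_feat = torch.randn(2, 16, 8, 32, 32)
block = WeightedConnectionBlock(num_inputs=2, channels=16)
out = block([rgb_feat, flow_feat])  # shape: (2, 16, 8, 32, 32)
```

In the paper's setting, such learned connection weights guide an evolutionary search over a population of overly-connected architectures; the sketch above only illustrates the weighted-fusion primitive at a single block.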


