
Bottom-Up and Top-Down Reasoning with Hierarchical Rectified Gaussians

2019-12-20

Abstract

Convolutional neural nets (CNNs) have demonstrated remarkable performance in recent history. Such approaches tend to work in a "unidirectional" bottom-up feed-forward fashion. However, practical experience and biological evidence tell us that feedback plays a crucial role, particularly for detailed spatial understanding tasks. This work explores "bidirectional" architectures that also reason with top-down feedback: neural units are influenced by both lower- and higher-level units. We do so by treating units as rectified latent variables in a quadratic energy function, which can be seen as a hierarchical Rectified Gaussian model (RG) [39]. We show that RGs can be optimized with a quadratic program (QP), which can in turn be optimized with a recurrent neural network (with rectified linear units). This allows RGs to be trained with GPU-optimized gradient descent. From a theoretical perspective, RGs help establish a connection between CNNs and hierarchical probabilistic models. From a practical perspective, RGs are well suited for detailed spatial tasks that can benefit from top-down reasoning. We illustrate them on the challenging task of keypoint localization under occlusions, where local bottom-up evidence may be misleading. We demonstrate state-of-the-art results on challenging benchmarks.
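The core idea in the abstract — minimizing a quadratic energy over rectified (nonnegative) latent units via recurrent ReLU updates — can be sketched as projected coordinate-style updates, where each hidden layer is refreshed using both bottom-up input and top-down feedback. The sketch below is a minimal illustration under assumed layer sizes and random weights; the exact energy, weight sharing, and update schedule follow the paper, not this snippet.

```python
import numpy as np

def relu(v):
    # Rectification = projection onto the nonnegative orthant
    return np.maximum(v, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # bottom-up evidence (input), hypothetical size
W1 = rng.normal(size=(6, 4))  # input -> layer 1 weights (placeholder)
W2 = rng.normal(size=(3, 6))  # layer 1 -> layer 2 weights (placeholder)

z1 = np.zeros(6)  # rectified latent units, layer 1
z2 = np.zeros(3)  # rectified latent units, layer 2 (top)

# Recurrent "bidirectional" updates: each pass, layer 1 combines
# bottom-up input (W1 @ x) with top-down feedback (W2.T @ z2),
# then rectifies — unrolling these updates yields a ReLU network.
for _ in range(10):
    z1 = relu(W1 @ x + W2.T @ z2)  # bottom-up + top-down influence
    z2 = relu(W2 @ z1)             # top layer: bottom-up only

print(z1, z2)
```

Unrolling the loop for a fixed number of iterations gives a feed-forward network with rectified linear units whose weights can be trained end-to-end with ordinary gradient descent, which is the practical connection the abstract draws between the QP view and CNNs.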
