
Teaching Compositionality to CNNs

2019-12-04
Abstract

Convolutional neural networks (CNNs) have shown great success in computer vision, approaching human-level performance when trained for specific tasks via application-specific loss functions. In this paper, we propose a method for augmenting and training CNNs so that their learned features are compositional. It encourages networks to form representations that disentangle objects from their surroundings and from each other, thereby promoting better generalization. Our method is agnostic to the specific details of the underlying CNN to which it is applied and can in principle be used with any CNN. As we show in our experiments, the learned representations lead to feature activations that are more localized and improve performance over non-compositional baselines in object recognition tasks.
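The abstract does not spell out the training objective, but the idea of disentangling objects from their surroundings is often realized as a mask-based auxiliary loss. The sketch below is an illustration of that general pattern, not the paper's actual formulation: it assumes PyTorch, a ResNet-18 backbone, and ground-truth object masks, and ties an object's in-context features to its features in isolation.

```python
# Minimal sketch of a mask-based compositionality regularizer.
# Assumptions (not taken from the paper): PyTorch, ResNet-18 backbone,
# per-image binary object masks, and an MSE consistency term.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class CompositionalCNN(nn.Module):
    """Backbone CNN plus an auxiliary compositional consistency loss."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Keep all convolutional stages; drop the average pool and FC head.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, images, masks=None):
        feats = self.features(images)                # (B, 512, H', W')
        logits = self.classifier(self.pool(feats).flatten(1))
        if masks is None:
            return logits, None

        # Downsample object masks to the feature-map resolution.
        m = F.interpolate(masks, size=feats.shape[-2:], mode="nearest")
        # Features of the isolated (masked-out) object.
        feats_object = self.features(images * masks)
        # Encourage the object's features in context (feats * m) to agree
        # with its features when the object is seen in isolation.
        comp_loss = F.mse_loss(feats * m, feats_object)
        return logits, comp_loss


if __name__ == "__main__":
    model = CompositionalCNN(num_classes=10)
    images = torch.randn(4, 3, 224, 224)
    masks = (torch.rand(4, 1, 224, 224) > 0.5).float()
    labels = torch.randint(0, 10, (4,))

    logits, comp_loss = model(images, masks)
    # Hypothetical weighting of the auxiliary term; the paper's setting
    # is not given in this abstract.
    loss = F.cross_entropy(logits, labels) + 0.1 * comp_loss
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```

Because the regularizer only adds a second forward pass and an extra loss term, it leaves the backbone architecture untouched, which is consistent with the abstract's claim that the method can in principle be used with any CNN.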

