ON THE RELATIONSHIP BETWEEN SELF-ATTENTION AND CONVOLUTIONAL LAYERS


2020-01-02

Abstract

Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, that they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with a sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that the phenomenon also occurs in practice, corroborating our analysis. Our code is publicly available.
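To make the expressivity claim concrete, below is a minimal NumPy sketch, not the paper's actual construction (which uses relative positional encodings and learned quadratic attention scores): it emulates a K×K convolution with K² "heads", each attending deterministically to one pixel offset, followed by an output projection that applies the matching slice of the convolution kernel. All function names and shapes here are illustrative assumptions.

```python
import numpy as np

def conv2d(x, w):
    """Reference K x K convolution, zero padding, stride 1.
    x: (H, W, C_in), w: (K, K, C_in, C_out)."""
    H, W, _ = x.shape
    K = w.shape[0]
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros((H, W, w.shape[-1]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + K, j:j + K]              # (K, K, C_in)
            out[i, j] = np.einsum('klc,klcd->d', patch, w)
    return out

def attention_as_conv(x, w):
    """Emulate the same convolution with K*K attention heads: head h puts all
    of its attention on the pixel at relative offset (di, dj) (hard attention),
    the value projection is the identity, and the output projection applies the
    corresponding slice of the convolution kernel to each head."""
    H, W, C_in = x.shape
    K = w.shape[0]
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    heads = []
    for di in range(K):                               # one head per offset
        for dj in range(K):
            heads.append(xp[di:di + H, dj:dj + W])    # (H, W, C_in)
    concat = np.stack(heads, axis=2)                  # (H, W, K*K, C_in)
    w_heads = w.reshape(K * K, C_in, -1)              # (K*K, C_in, C_out)
    return np.einsum('ijhc,hcd->ijd', concat, w_heads)

x = np.random.randn(8, 8, 3)
w = np.random.randn(3, 3, 3, 5)
print(np.allclose(conv2d(x, w), attention_as_conv(x, w)))  # True
```

The paper's proof builds this same per-offset attention pattern inside a standard multi-head self-attention layer via relative positional encodings, so a layer with K² heads can express any K×K convolution.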
