LOGIC AND THE 2-SIMPLICIAL TRANSFORMER

2020-01-02

Abstract

We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.
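The idea of generalising dot-product attention to a higher-dimensional form can be illustrated with a minimal sketch: where standard attention scores a query against a single key via an inner product and mixes single value vectors, a 2-simplicial head scores a query against a *pair* of keys via a trilinear form and mixes tensor (outer) products of pairs of value vectors. The sketch below assumes a simple scaled trilinear contraction as the scoring function; the paper's actual construction differs in detail, and all names here (`two_simplicial_attention`, `K1`, `K2`, etc.) are illustrative, not from the source.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def two_simplicial_attention(Q, K1, K2, V1, V2):
    """Toy 2-simplicial attention over n entities of dimension d.

    Q, K1, K2, V1, V2: arrays of shape (n, d).
    Returns an (n, d*d) update: for each entity i, a softmax-weighted
    sum over entity pairs (j, k) of the outer products V1[j] ⊗ V2[k].
    A learned projection back to dimension d would normally follow.
    """
    n, d = Q.shape
    # trilinear scoring: one logit per (query i, key-pair (j, k))
    logits = np.einsum('id,jd,kd->ijk', Q, K1, K2) / np.sqrt(d)
    # normalise jointly over all n*n pairs for each query
    A = softmax(logits.reshape(n, -1), axis=-1).reshape(n, n, n)
    # update with weighted tensor products of value vectors
    update = np.einsum('ijk,jd,ke->ide', A, V1, V2)  # shape (n, d, d)
    return update.reshape(n, -1)

# usage: five entities with 8-dimensional representations
rng = np.random.default_rng(0)
Q, K1, K2, V1, V2 = (rng.normal(size=(5, 8)) for _ in range(5))
out = two_simplicial_attention(Q, K1, K2, V1, V2)
print(out.shape)  # (5, 64)
```

Note how ordinary attention is recovered as the degenerate 1-simplicial case: drop the second key/value stream and the trilinear form collapses to the usual query-key dot product with a plain weighted sum of values.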

