
Improving Textual Network Embedding with Global Attention via Optimal Transport

2019-09-24

Abstract: Constructing highly informative network embeddings is an important tool for network analysis. An embedding encodes network topology, along with other useful side information, into low-dimensional node-based feature representations that can be exploited by statistical modeling. This work focuses on learning context-aware network embeddings augmented with text data. We reformulate the network-embedding problem and present two novel strategies to improve over traditional attention mechanisms: (i) a content-aware sparse attention module based on optimal transport, and (ii) a high-level attention parsing module. Our approach yields naturally sparse and self-normalized relational inference, and it can capture long-term interactions between sequences, thus addressing the challenges faced by existing textual network embedding schemes. Extensive experiments demonstrate that our model consistently outperforms alternative state-of-the-art methods.
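To make the optimal-transport view of attention concrete, the following is a minimal sketch (not the paper's exact model): an entropy-regularized transport plan between two token-embedding sequences, computed with Sinkhorn iterations, can serve as a sparse, self-normalized attention matrix. The function name, cosine cost, uniform marginals, and all hyperparameter values here are illustrative assumptions.

```python
import numpy as np

def sinkhorn_attention(X, Y, eps=0.1, n_iters=50):
    """Illustrative sketch: use an entropy-regularized optimal-transport
    plan between two embedding sequences as attention weights.

    X: (n, d) and Y: (m, d) token embeddings (hypothetical inputs;
    the paper's architecture and cost function may differ).
    """
    # Cost matrix: cosine distance between every pair of tokens.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    C = 1.0 - Xn @ Yn.T
    # Gibbs kernel; small eps concentrates mass -> near-sparse plans.
    K = np.exp(-C / eps)
    # Uniform marginals: every token must send/receive equal total mass,
    # which is what makes the resulting attention self-normalized.
    a = np.full(X.shape[0], 1.0 / X.shape[0])
    b = np.full(Y.shape[0], 1.0 / Y.shape[0])
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    # Transport plan doubles as the attention matrix.
    P = u[:, None] * K * v[None, :]
    return P

rng = np.random.default_rng(0)
P = sinkhorn_attention(rng.standard_normal((4, 8)),
                       rng.standard_normal((6, 8)))
```

Because the marginals are enforced by construction, each row of `P` sums to exactly 1/n with no extra softmax normalization, and the entropic regularizer `eps` controls how peaked (sparse) the plan is.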
