
Getting in Shape: Word Embedding SubSpaces

2019-10-10
Abstract: Many tasks in natural language processing require the alignment of word embeddings. Embedding alignment relies on the geometric properties of the manifold of word vectors. This paper focuses on supervised linear alignment and studies the relationship between the shape of the target embedding space and alignment quality. We assess the performance of aligned word vectors on semantic similarity tasks and find that the isotropy of the target embedding is critical to the alignment. Furthermore, aligning with isotropic noise can deliver satisfactory results. We provide a theoretical framework and guarantees that aid in the understanding of the empirical results.
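The abstract does not specify the alignment procedure, but supervised linear alignment of embeddings is commonly solved with orthogonal Procrustes, and isotropy can be proxied by how uniform the singular-value spectrum of the embedding matrix is. The sketch below illustrates both ideas; the function names and the singular-value isotropy proxy are this sketch's assumptions, not details from the paper.

```python
import numpy as np

def procrustes_align(X, Y):
    # Orthogonal Procrustes: the W minimizing ||X @ W - Y||_F over
    # orthogonal W is U @ Vt, where U, Vt come from the SVD of X.T @ Y.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def isotropy_ratio(E):
    # Crude isotropy proxy: ratio of smallest to largest singular value
    # of the embedding matrix (1.0 = perfectly isotropic spectrum).
    s = np.linalg.svd(E, compute_uv=False)
    return s[-1] / s[0]

# Toy check: recover a known rotation between two embedding spaces.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))          # source embeddings
R_true, _ = np.linalg.qr(rng.standard_normal((50, 50)))
Y = X @ R_true                               # rotated target embeddings
W = procrustes_align(X, Y)
print(np.allclose(W, R_true))                # prints True
print(round(float(isotropy_ratio(X)), 3))    # near 1 for Gaussian data
```

Because the source matrix is full rank, the orthogonal polar factor of `X.T @ Y` is unique and the recovered `W` matches the planted rotation exactly.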

