word2ket: Space-Efficient Word Embeddings Inspired by Quantum Entanglement

2020-01-02

Abstract
Deep learning natural language processing models often use vector word embeddings, such as word2vec or GloVe, to represent words. A discrete sequence of words can be much more easily integrated with downstream neural layers if it is represented as a sequence of continuous vectors. Also, semantic relationships between words, learned from a text corpus, can be encoded in the relative configurations of the embedding vectors. However, storing and accessing embedding vectors for all words in a dictionary requires a large amount of space, and may strain systems with limited GPU memory. Here, we used approaches inspired by quantum computing to propose two related methods, word2ket and word2ketXS, for storing the word embedding matrix during training and inference in a highly space-efficient way. Our approach achieves a hundred-fold or more reduction in the space required to store the embeddings with almost no relative drop in accuracy in practical natural language processing tasks.
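The abstract describes the method only at a high level: each word vector is represented in a compressed, tensor-product form rather than stored as a dense row of the embedding matrix. The sketch below is an illustrative reading of that idea, not the authors' implementation; it builds each embedding as a sum of `rank` Kronecker products of `order` small vectors of size `q`, so the stored parameters per word grow as `rank * order * q` instead of `q ** order`. The class name `TensorProductEmbedding` and the default values of `q`, `order`, and `rank` are assumptions made for this example.

```python
# Minimal sketch (assumed, not the authors' code) of a tensor-product
# factored embedding layer in the spirit of word2ket.
import torch
import torch.nn as nn


class TensorProductEmbedding(nn.Module):
    def __init__(self, vocab_size: int, q: int = 4, order: int = 4, rank: int = 2):
        super().__init__()
        # Small factor vectors: one (q,) vector per word, per rank term, per tensor slot.
        self.factors = nn.Parameter(torch.randn(vocab_size, rank, order, q) * 0.1)
        self.embedding_dim = q ** order  # dimension of the reconstructed embedding

    def forward(self, token_ids: torch.LongTensor) -> torch.Tensor:
        # Factors for the requested tokens: (batch, rank, order, q)
        f = self.factors[token_ids]
        batch, rank, order, q = f.shape
        # Build each rank-1 term as an iterated Kronecker product of its factors.
        vec = f[:, :, 0, :]                      # (batch, rank, q)
        for j in range(1, order):
            vec = torch.einsum("bri,brj->brij", vec, f[:, :, j, :])
            vec = vec.reshape(batch, rank, -1)   # (batch, rank, q**(j+1))
        # Sum the rank-1 terms (a superposition of product states).
        return vec.sum(dim=1)                    # (batch, q**order)


if __name__ == "__main__":
    emb = TensorProductEmbedding(vocab_size=10000, q=4, order=4, rank=2)
    ids = torch.tensor([1, 42, 7])
    print(emb(ids).shape)                 # torch.Size([3, 256])
    print(emb.factors.numel())            # 320,000 stored parameters
    print(10000 * emb.embedding_dim)      # 2,560,000 for a dense embedding table
```

This sketch only compresses along the embedding dimension; per the abstract, word2ketXS additionally factorizes across the vocabulary dimension of the whole embedding matrix, which is presumably where the hundred-fold or greater savings are obtained.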
