
Putting words in context: LSTM language models and lexical ambiguity

2019-09-19
Abstract

In neural network models of language, words are commonly represented using context-invariant representations (word embeddings), which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial. We investigate how an LSTM language model deals with lexical ambiguity in English, designing a method to probe its hidden representations for lexical and contextual information about words. We find that both types of information are represented to a large extent, but also that there is room for improvement for contextual information.
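The probing setup described in the abstract amounts to training a small supervised classifier on top of the frozen language model's hidden states, one state per target word. The sketch below is a minimal illustration of that idea in PyTorch; the model size, the toy data, and the label set are all made-up assumptions, not the paper's actual architecture, corpus, or probing tasks.

```python
# Minimal sketch of a diagnostic (probing) classifier over LSTM hidden states.
# All dimensions and data below are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab_size, emb_dim, hid_dim, n_classes = 100, 32, 64, 5

class TinyLM(nn.Module):
    """Stand-in for a pretrained LSTM language model (frozen during probing)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, tokens):
        hidden, _ = self.lstm(self.emb(tokens))  # (batch, seq_len, hid_dim)
        return hidden                            # the states we probe

lm = TinyLM()
for p in lm.parameters():                        # language model is not updated
    p.requires_grad_(False)

# Linear probe: predicts a word-level label (e.g. a sense or lexical class)
# from the hidden state at the target word's position.
probe = nn.Linear(hid_dim, n_classes)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Fake probing data: 16 "sentences" of length 10, one target position and
# one label per sentence.
tokens = torch.randint(0, vocab_size, (16, 10))
target_pos = torch.randint(0, 10, (16,))
labels = torch.randint(0, n_classes, (16,))

for epoch in range(20):
    hidden = lm(tokens)                                  # frozen encoder
    target_h = hidden[torch.arange(16), target_pos]      # state at target word
    loss = loss_fn(probe(target_h), labels)
    optimizer.zero_grad()
    loss.backward()                                      # updates the probe only
    optimizer.step()

print("final probe loss:", loss.item())
```

How well such a probe predicts lexical versus contextual properties of the target word is then read as evidence for how much of each kind of information the hidden state encodes.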

