Pre-Learning Environment Representations for Data-Efficient Neural Instruction Following

2019-09-19
Abstract

We consider the problem of learning to map from natural language instructions to state transitions (actions) in a data-efficient manner. Our method takes inspiration from the idea that it should be easier to ground language to concepts that have already been formed through pre-linguistic observation. We augment a baseline instruction-following learner with an initial environment-learning phase that uses observations of language-free state transitions to induce a suitable latent representation of actions before processing the instruction-following training data. We show that mapping to pre-learned representations substantially improves performance over systems whose representations are learned from limited instructional data alone.
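The two-phase setup described in the abstract lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch illustration of the idea, not the paper's actual architecture: phase 1 fits a transition autoencoder on abundant language-free (state, next-state) pairs so that the latent code z comes to represent actions; phase 2 then trains an instruction encoder, on the small instructional dataset, to regress into that frozen latent space. All module names, dimensions, and data here are illustrative placeholders.

```python
import torch
import torch.nn as nn

class TransitionAutoencoder(nn.Module):
    """Phase 1 (environment learning): induce a latent action code z
    from language-free (state, next_state) transition pairs."""

    def __init__(self, state_dim: int, latent_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Encoder: infer the latent action that explains s -> s'.
        self.encoder = nn.Sequential(
            nn.Linear(2 * state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )
        # Decoder: predict the next state from the current state and z.
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor):
        z = self.encoder(torch.cat([s, s_next], dim=-1))
        s_next_pred = self.decoder(torch.cat([s, z], dim=-1))
        return z, s_next_pred


class InstructionGrounder(nn.Module):
    """Phase 2 (instruction following): map an instruction into the
    pre-learned latent action space; the autoencoder stays frozen."""

    def __init__(self, vocab_size: int, latent_dim: int,
                 embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        _, h = self.rnn(self.embed(tokens))   # h: (1, batch, hidden_dim)
        return self.to_latent(h[-1])          # (batch, latent_dim)


# Phase 1: fit the autoencoder on language-free transitions by
# minimizing the next-state reconstruction error.
env_model = TransitionAutoencoder(state_dim=10, latent_dim=8)
opt = torch.optim.Adam(env_model.parameters(), lr=1e-3)
s, s_next = torch.randn(32, 10), torch.randn(32, 10)  # placeholder data
z, s_next_pred = env_model(s, s_next)
loss = nn.functional.mse_loss(s_next_pred, s_next)
opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: on the small instructional dataset, regress instructions onto
# the (now fixed) latent codes of their observed transitions.
for p in env_model.parameters():
    p.requires_grad_(False)
grounder = InstructionGrounder(vocab_size=1000, latent_dim=8)
tokens = torch.randint(0, 1000, (32, 12))             # placeholder instructions
target_z = env_model.encoder(torch.cat([s, s_next], dim=-1))
ground_loss = nn.functional.mse_loss(grounder(tokens), target_z)
```

At execution time, an instruction is mapped to a latent action and the frozen decoder predicts the resulting state transition. Because the latent space is shaped before any language is seen, the grounder has far fewer degrees of freedom to fit from limited instructional data, which is the data-efficiency argument the abstract makes.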
