CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks


2019-09-22
Abstract: Lake and Baroni (2018) introduced the SCAN dataset, which probes the ability of seq2seq models to capture compositional generalizations, such as inferring the meaning of "jump around" zero-shot from its component words. Recurrent networks (RNNs) were found to fail completely on the most challenging generalization cases. We test here a convolutional network (CNN) on these tasks, reporting hugely improved performance with respect to RNNs. Despite the big improvement, the CNN has not, however, induced systematic rules, suggesting that the difference between compositional and non-compositional behaviour is not clear-cut.
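To make the compositional task concrete, a minimal sketch of SCAN-style command interpretation is shown below, covering only a fragment of the grammar described by Lake and Baroni (2018); the function name `interpret` and the exact grammar coverage are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of a fragment of SCAN command interpretation
# (after Lake and Baroni, 2018). Illustrative only: covers a
# primitive verb plus optional "left"/"right", "around", and
# "twice"/"thrice" modifiers, not the full SCAN grammar.

PRIMITIVES = {
    "walk": ["I_WALK"],
    "run": ["I_RUN"],
    "jump": ["I_JUMP"],
    "look": ["I_LOOK"],
}

def interpret(command: str) -> list[str]:
    """Map a SCAN-style command string to its action sequence."""
    words = command.split()
    verb, mods = words[0], words[1:]
    actions = list(PRIMITIVES[verb])
    if mods[:1] == ["around"]:
        # "jump around left" = turn left and jump, four times
        turn = "I_TURN_LEFT" if mods[1] == "left" else "I_TURN_RIGHT"
        actions = ([turn] + actions) * 4
        mods = mods[2:]
    elif mods[:1] in (["left"], ["right"]):
        turn = "I_TURN_LEFT" if mods[0] == "left" else "I_TURN_RIGHT"
        actions = [turn] + actions
        mods = mods[1:]
    if mods == ["twice"]:
        actions = actions * 2
    elif mods == ["thrice"]:
        actions = actions * 3
    return actions
```

The zero-shot challenge the abstract refers to is that a model sees "jump" only in isolation at training time, yet must produce the composed sequence for commands like "jump around left" at test time.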


