
NEURAL MACHINE TRANSLATION WITH UNIVERSAL VISUAL REPRESENTATION

2019-12-30

ABSTRACT
Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations. In this paper, we present a universal visual representation learned over monolingual corpora with image annotations, which overcomes the lack of large-scale bilingual sentence-image pairs, thereby extending image applicability in NMT. In detail, a group of images with topics similar to the source sentence is retrieved from a light topic-image lookup table learned over the existing sentence-image pairs, and then encoded as image representations by a pre-trained ResNet. An attention layer with gated weighting is employed to fuse the visual information and text information as input to the decoder for predicting target translations. In particular, the proposed method enables the visual information to be integrated into large-scale text-only NMT in addition to multimodal NMT. Experiments on four widely used translation datasets, including WMT'16 English-to-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K, show that the proposed approach achieves significant improvements over strong baselines.
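The gated attention fusion described in the abstract can be sketched roughly as follows. This is a minimal PyTorch sketch, not the paper's exact formulation: the module name GatedVisualFusion, the multi-head attention configuration, and the tensor shapes are assumptions for illustration. Encoder text states attend over the ResNet features of the retrieved images, and a learned gate decides how much of the resulting visual context is added to each source position.

    import torch
    import torch.nn as nn

    class GatedVisualFusion(nn.Module):
        """Fuse retrieved image features into the source text representation (illustrative sketch)."""
        def __init__(self, d_model: int, num_heads: int = 8):
            super().__init__()
            # Text states attend over image features (assumed attention setup).
            self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
            self.gate = nn.Linear(2 * d_model, d_model)

        def forward(self, text_states: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
            # text_states: (batch, src_len, d_model) from the NMT encoder
            # image_feats: (batch, n_images, d_model) pre-trained ResNet features of the
            #              images retrieved from the topic-image lookup table
            visual_ctx, _ = self.attn(text_states, image_feats, image_feats)
            # Gated weighting: per position, decide how much visual context to mix in.
            lam = torch.sigmoid(self.gate(torch.cat([text_states, visual_ctx], dim=-1)))
            return text_states + lam * visual_ctx

In this reading, the fused states feed the decoder in place of the plain encoder output; because the images come from the topic-image lookup table rather than from paired annotations, the same fusion applies to large-scale text-only NMT as well as to multimodal NMT.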

