
Unsupervised Multi-modal Neural Machine Translation

2019-09-27

Abstract: Unsupervised neural machine translation (UNMT) has recently achieved remarkable results [20] with only large monolingual corpora in each language. However, the uncertainty of associating target sentences with source sentences makes UNMT theoretically an ill-posed problem. This work investigates the possibility of utilizing images for disambiguation to improve the performance of UNMT. Our assumption is intuitively based on the invariance property of images, i.e., descriptions of the same visual content in different languages should be approximately similar. We propose an unsupervised multi-modal neural machine translation (UMNMT) framework based on a language translation cycle consistency loss conditioned on the image, aiming to learn bidirectional multi-modal translation simultaneously. Through alternating training between multi-modal and uni-modal data, our inference model can translate with or without the image. On the widely used Multi30K dataset, the experimental results of our approach are significantly better than those of text-only UNMT on the 2016 test set.
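To make the image-conditioned cycle consistency idea concrete, below is a minimal PyTorch-style sketch of one direction of the cycle (source → target → source, conditioned on the shared image). The function names `src_to_tgt` and `tgt_to_src` are hypothetical multi-modal translators introduced only for illustration; this is not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def image_conditioned_cycle_loss(src_tokens, image_feats, src_to_tgt, tgt_to_src):
    """Illustrative sketch of a back-translation cycle loss conditioned on an image.

    src_tokens:  LongTensor of shape (batch, src_len) with source token ids.
    image_feats: Tensor of visual features shared by both translation directions.
    src_to_tgt / tgt_to_src: hypothetical translators taking (tokens, image_feats)
    and returning per-token logits over the target vocabulary.
    """
    # Translate source -> target conditioned on the image; the intermediate
    # discrete translation is treated as data (no gradient), as in back-translation.
    with torch.no_grad():
        pseudo_tgt = src_to_tgt(src_tokens, image_feats).argmax(dim=-1)

    # Translate back target -> source, again conditioned on the same image.
    recon_logits = tgt_to_src(pseudo_tgt, image_feats)

    # The cycle should reconstruct the original source sentence.
    return F.cross_entropy(
        recon_logits.reshape(-1, recon_logits.size(-1)),
        src_tokens.reshape(-1),
    )
```

In the framework described by the abstract, an analogous loss would be computed for the opposite direction as well, and training would alternate between batches with and without image features so that the model can also translate text alone at inference time.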

