Bridging the Gap between Training and Inference for Neural Machine Translation

2019-09-19
Abstract: Neural Machine Translation (NMT) generates target words sequentially by predicting the next word conditioned on the context words. At training time it predicts with the ground-truth words as context, while at inference it has to generate the entire sequence from scratch. This discrepancy in the fed context leads to error accumulation along the way. Furthermore, word-level training requires strict matching between the generated sequence and the ground-truth sequence, which leads to over-correction of different but reasonable translations. In this paper, we address these issues by sampling context words not only from the ground-truth sequence but also from the sequence predicted by the model during training, where the predicted sequence is selected with a sentence-level optimum. Experimental results on Chinese→English and WMT'14 English→German translation tasks demonstrate that our approach achieves significant improvements on multiple datasets.
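The sampling scheme described in the abstract can be sketched concretely. The toy PyTorch code below is a minimal illustration, not the authors' implementation: ToyDecoder, train_step, and gold_prob are hypothetical names. At each decoding step during training, the context word fed to the decoder is drawn from the ground truth with probability gold_prob and from the model's own prediction otherwise; the paper's full method additionally selects the predicted sequence with a sentence-level oracle (e.g., BLEU over beam candidates) and decays the ground-truth probability as training progresses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDecoder(nn.Module):
    """Hypothetical stand-in for an NMT decoder (the paper uses full
    RNN/Transformer NMT models); one GRU step per target word."""
    def __init__(self, vocab_size=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.cell = nn.GRUCell(hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def step(self, prev_word, state):
        state = self.cell(self.embed(prev_word), state)
        return self.out(state), state

def train_step(decoder, gold, gold_prob):
    """Cross-entropy training where the context word at each step is
    sampled from the gold sequence (prob. gold_prob) or from the
    model's own previous prediction (prob. 1 - gold_prob)."""
    batch, seq_len = gold.size()
    state = torch.zeros(batch, decoder.cell.hidden_size)
    prev = gold[:, 0]                      # <bos> tokens
    loss = 0.0
    for t in range(1, seq_len):
        logits, state = decoder.step(prev, state)
        loss = loss + F.cross_entropy(logits, gold[:, t])
        pred = logits.argmax(dim=-1)       # word-level oracle: model's own guess
        use_gold = torch.rand(batch) < gold_prob
        prev = torch.where(use_gold, gold[:, t], pred)  # mix the two contexts
    return loss / (seq_len - 1)

# Usage on random data; gold_prob would be decayed toward 0 over epochs.
decoder = ToyDecoder()
gold = torch.randint(0, 100, (8, 12))      # [batch, target length]
loss = train_step(decoder, gold, gold_prob=0.75)
loss.backward()
```

Feeding the model's own predictions as context during training exposes it to the inference-time context distribution, which is what mitigates the error accumulation described above.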
