
Effective Adversarial Regularization for Neural Machine Translation

2019-09-19
Abstract: A regularization technique based on adversarial perturbation, which was initially developed in the field of image processing, has been successfully applied to text classification tasks and has yielded attractive improvements. We aim to further apply this promising methodology to more sophisticated and critical neural models in the natural language processing field, i.e., neural machine translation (NMT) models. However, it is not trivial to apply this methodology to such models. Thus, this paper investigates the effectiveness of several possible configurations for applying the adversarial perturbation and reveals that the adversarial regularization technique can significantly and consistently improve the performance of widely used NMT models, such as LSTM-based and Transformer-based models.
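The core idea, as used in earlier work on text classification (e.g., Miyato et al.'s adversarial training on word embeddings), is to perturb the input embeddings in the direction that most increases the training loss and then train against that perturbed input as a regularizer. Below is a minimal, hypothetical PyTorch sketch of this style of adversarial regularization; the `embed`/`forward_from_embeddings` interface, the `epsilon` scale, and the simple sum of the two losses are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def adversarial_regularization_loss(model, src_tokens, tgt_tokens, epsilon=1.0):
    """Sketch of adversarial regularization on source embeddings (assumed interface)."""
    # Embed the source tokens; the embed()/forward_from_embeddings() split on
    # `model` is a hypothetical interface, not the paper's actual API.
    src_embeds = model.embed(src_tokens)

    # Standard (clean) translation loss.
    logits = model.forward_from_embeddings(src_embeds)
    clean_loss = F.cross_entropy(logits.view(-1, logits.size(-1)), tgt_tokens.view(-1))

    # Gradient of the loss w.r.t. the embeddings gives the worst-case direction.
    grad, = torch.autograd.grad(clean_loss, src_embeds, retain_graph=True)

    # L2-normalize and scale the gradient to obtain the adversarial perturbation;
    # the perturbation itself is treated as a constant (detached).
    norm = grad.norm(p=2, dim=-1, keepdim=True).clamp_min(1e-12)
    r_adv = (epsilon * grad / norm).detach()

    # Loss under the perturbed embeddings acts as the regularization term.
    adv_logits = model.forward_from_embeddings(src_embeds + r_adv)
    adv_loss = F.cross_entropy(adv_logits.view(-1, adv_logits.size(-1)), tgt_tokens.view(-1))

    return clean_loss + adv_loss
```

In training, the returned value would stand in for the plain cross-entropy objective on each batch; where the perturbation is injected (encoder side, decoder side, or both) is one of the configuration choices the paper investigates.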

Previous: Domain Adaptation of Neural Machine Translation by Lexicon Induction

Next: Exploring Phoneme-Level Speech Representations for End-to-End Speech Translation
