Abstract
Neural machine translation (NMT) models are often vulnerable to noisy perturbations in the input. We propose an approach to improving the robustness of NMT models that consists of two parts: (1) attack the translation model with adversarial source examples; (2) defend the translation model with adversarial target inputs to improve its robustness against the adversarial source inputs. To generate adversarial inputs, we propose a gradient-based method that crafts adversarial examples informed by the translation loss over the clean inputs. Experimental results on Chinese-English and English-German translation tasks demonstrate that our approach achieves significant improvements (2.8 and 1.6 BLEU points) over the Transformer on standard clean benchmarks and exhibits higher robustness on noisy data.