Abstract
This paper proposes a novel method for injecting custom terminology into neural machine translation at run time. Previous work has mainly proposed modifications to the decoding algorithm that constrain the output to include target terms provided at run time. While effective, these constrained decoding methods add significant computational overhead to the inference step and, as we show in this paper, can be brittle when tested in realistic conditions. We instead approach the problem by training a neural MT system to learn how to use custom terminology when it is provided alongside the input. Comparative experiments show that our method is not only more effective than a state-of-the-art implementation of constrained decoding, but is also as fast as constraint-free decoding.