STACL: Simultaneous Translation with Implicit Anticipation and
Controllable Latency using Prefix-to-Prefix Framework
Abstract
Simultaneous translation, which translates sentences before they are finished, is useful in many scenarios but is notoriously difficult due to word-order differences. While the conventional seq-to-seq framework is only suitable for full-sentence translation, we propose a novel prefix-to-prefix framework for simultaneous translation that implicitly learns to anticipate in a single translation model. Within this framework, we present a very simple yet surprisingly effective "wait-k" policy trained to generate the target sentence concurrently with the source sentence, but always k words behind. Experiments show our strategy achieves low latency and reasonable quality (compared to full-sentence translation) on 4 directions: zh↔en and de↔en.
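
To make the wait-k schedule concrete, the following minimal Python sketch (not from the paper; translate_step is a hypothetical function that predicts the next target word from the currently available source prefix and the target prefix) illustrates how decoding proceeds concurrently with the incoming source, always lagging k words behind:

    def wait_k_decode(source_stream, translate_step, k, max_len=200):
        """Emit target words concurrently with the source, always k words behind."""
        src_prefix, tgt = [], []
        for word in source_stream:               # source words arrive one at a time
            src_prefix.append(word)
            if len(src_prefix) >= k and (not tgt or tgt[-1] != "</s>"):
                # after the first k source words, emit one target word per source word
                tgt.append(translate_step(src_prefix, tgt))
        # source has finished: decode the remaining tail of the translation
        while (not tgt or tgt[-1] != "</s>") and len(tgt) < max_len:
            tgt.append(translate_step(src_prefix, tgt))
        return tgt

In this sketch, the first k source words are read before any target word is produced; thereafter one target word is emitted per incoming source word, and the remainder is generated once the source sentence ends.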