Abstract
The encoder-decoder framework has achieved
promising progress for many sequence generation
tasks, such as neural machine translation and text
summarization. Such a framework usually generates a sequence token by token from left to right,
hence (1) this autoregressive decoding procedure is
time-consuming when the output sentence becomes
longer, and (2) it lacks the guidance of future context which is crucial to avoid under-translation. To
alleviate these issues, we propose a synchronous
bidirectional sequence generation (SBSG) model
which predicts its outputs from both sides to the
middle simultaneously. In the SBSG model, we enable the left-to-right (L2R) and right-to-left (R2L)
generation to help and interact with each other by
leveraging an interactive bidirectional attention network. Experiments on neural machine translation
(En→De, Ch→En, and En→Ro) and text summarization tasks show that the proposed model significantly speeds up decoding while improving the
generation quality compared to the autoregressive
Transformer.
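To make the decoding order concrete, the following toy sketch shows the both-sides-to-the-middle schedule the abstract describes: an L2R and an R2L decoder emit tokens at each step and their half-sequences are joined in the middle. The stub prediction functions and the fixed target length are assumptions for illustration only; the actual SBSG model couples the two directions through its interactive bidirectional attention network, which this sketch does not implement.

```python
# Toy sketch (NOT the paper's implementation): two decoders emit tokens
# simultaneously, one left-to-right and one right-to-left, and the two
# half-sequences meet in the middle. The stub "decoders" below just
# return placeholder tokens to show the decoding order.

def next_l2r_token(prefix, step):
    # stand-in for the L2R decoder's prediction given its prefix
    return f"l{step}"

def next_r2l_token(suffix, step):
    # stand-in for the R2L decoder's prediction given its suffix
    return f"r{step}"

def sbsg_decode(target_len):
    left, right = [], []
    step = 0
    # each step produces up to two tokens (one per direction), so decoding
    # takes roughly half as many steps as left-to-right generation alone
    while len(left) + len(right) < target_len:
        left.append(next_l2r_token(left, step))
        if len(left) + len(right) < target_len:
            right.append(next_r2l_token(right, step))
        step += 1
    # the R2L half is produced in reverse order, so flip it when joining
    return left + right[::-1]

print(sbsg_decode(5))  # ['l0', 'l1', 'l2', 'r1', 'r0']
```

Because both halves grow in parallel, the loop runs about `target_len / 2` times, which is the source of the decoding speed-up claimed for SBSG over purely autoregressive left-to-right generation.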