Abstract
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training to better exploit the hardware, and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation, and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
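For reference, a gated linear unit computes GLU([A; B]) = A ⊗ σ(B): the convolution produces twice the needed channels, and one half gates the other. Because the linear half A is never passed through a squashing nonlinearity, gradients flow through it unattenuated, which is what eases propagation. Below is a minimal numpy sketch of this gating (an illustrative standalone function, not the paper's implementation; the channel-last layout is an assumption):

```python
import numpy as np

def glu(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Gated linear unit: GLU([a; b]) = a * sigmoid(b).

    Splits `x` in half along `axis`; the first half is the linear
    path, the second half gates it. The linear path carries gradient
    with no squashing nonlinearity applied to it.
    """
    a, b = np.split(x, 2, axis=axis)
    return a * (1.0 / (1.0 + np.exp(-b)))  # a * sigmoid(b)

# Example: a convolution output with 2 * 512 channels yields a
# 512-channel gated activation.
h = np.random.randn(10, 1024)  # (time steps, 2 * channels)
out = glu(h)                   # shape (10, 512)
```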