Abstract
This paper tackles the problem of disentangling the latent representations of style and content in language models. We propose a simple yet effective approach that incorporates auxiliary multi-task and adversarial objectives for style prediction and bag-of-words prediction, respectively. We show, both qualitatively and quantitatively, that style and content are indeed disentangled in the latent space. The learned disentangled representations can be applied to style transfer on non-parallel corpora. Our approach achieves strong performance in terms of transfer accuracy, content preservation, and language fluency, compared with various previous approaches.
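
One plausible form of the combined training objective described above is sketched below; the symbols, signs, and weighting scheme are illustrative assumptions rather than details given in the abstract. An autoencoder reconstruction loss is augmented with multi-task losses that tie the style space $s$ to the style label and the content space $c$ to a bag-of-words target, while adversarial losses (trained against separate discriminators) penalize each space for encoding the other factor:
\[
\mathcal{J} \;=\; \mathcal{J}_{\mathrm{rec}}
\;+\; \lambda_{\mathrm{mul}}^{(s)}\,\mathcal{J}_{\mathrm{mul}}^{(s)}
\;-\; \lambda_{\mathrm{adv}}^{(s)}\,\mathcal{J}_{\mathrm{adv}}^{(s)}
\;+\; \lambda_{\mathrm{mul}}^{(c)}\,\mathcal{J}_{\mathrm{mul}}^{(c)}
\;-\; \lambda_{\mathrm{adv}}^{(c)}\,\mathcal{J}_{\mathrm{adv}}^{(c)},
\]
where $\mathcal{J}_{\mathrm{rec}}$ is the reconstruction loss, $\mathcal{J}_{\mathrm{mul}}^{(s)}$ and $\mathcal{J}_{\mathrm{mul}}^{(c)}$ are the multi-task style-prediction and bag-of-words-prediction losses, $\mathcal{J}_{\mathrm{adv}}^{(s)}$ and $\mathcal{J}_{\mathrm{adv}}^{(c)}$ are the corresponding adversarial losses, and the $\lambda$ terms are (assumed) balancing hyperparameters.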