Abstract
Large-scale pretrained language models define the state of the art in natural language processing, achieving outstanding performance on a variety of tasks. We study how these architectures can be applied and adapted for natural language generation, comparing a number of architectural and training schemes. We focus in particular on open-domain dialog as a typical high-entropy generation task, presenting and comparing several architectures for adapting pretrained models, which achieve state-of-the-art results.