Keeping Notes: Conditional Natural Language Generation
with a Scratchpad Mechanism
Abstract
We introduce the Scratchpad Mechanism, a novel addition to the sequence-to-sequence (seq2seq) neural network architecture, and demonstrate its effectiveness in improving the overall fluency of seq2seq models for natural language generation tasks. By enabling the decoder at each time step to write to all of the encoder output layers, Scratchpad can employ the encoder as a "scratchpad" memory to keep track of what has been generated so far and thereby guide future generation. We evaluate Scratchpad in the context of three well-studied natural language generation tasks — Machine Translation, Question Generation, and Text Summarization — and obtain state-of-the-art or comparable performance on standard datasets for each task. Qualitative assessments in the form of human judgements (question generation), attention visualization (MT), and sample output (summarization) provide further evidence of the ability of Scratchpad to generate fluent and expressive output.
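To make the mechanism concrete, the following is a minimal sketch of one decoder step that attends over the encoder states and then writes a gated update back into them, treating the encoder outputs as a rewritable "scratchpad" memory. The GRU cell, the attention scorer, and the gate/update parameterizations here are illustrative assumptions for exposition, not the paper's exact equations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScratchpadDecoderStep(nn.Module):
    """One decoder step that (1) attends over the encoder states and
    (2) writes a gated update back to them, so later steps see a
    record of what has already been generated."""

    def __init__(self, hidden_size):
        super().__init__()
        self.cell = nn.GRUCell(hidden_size, hidden_size)
        # Illustrative parameterizations (assumptions, not the paper's):
        self.attn = nn.Linear(2 * hidden_size, 1)               # attention scorer
        self.update = nn.Linear(2 * hidden_size, hidden_size)   # global update vector u_t
        self.write_gate = nn.Linear(2 * hidden_size, 1)         # per-position write strength

    def forward(self, y_prev, dec_state, enc_states):
        # y_prev, dec_state: (batch, hidden)
        # enc_states: (batch, src_len, hidden) -- the scratchpad memory
        dec_state = self.cell(y_prev, dec_state)

        # Attend over the (possibly rewritten) encoder states.
        expanded = dec_state.unsqueeze(1).expand_as(enc_states)
        scores = self.attn(torch.cat([enc_states, expanded], dim=-1)).squeeze(-1)
        weights = F.softmax(scores, dim=-1)                      # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)

        # Scratchpad write: a global update vector, blended into each
        # encoder state by a per-position sigmoid gate.
        u = torch.tanh(self.update(torch.cat([dec_state, context], dim=-1)))
        alpha = torch.sigmoid(
            self.write_gate(torch.cat([enc_states, expanded], dim=-1)))  # (batch, src_len, 1)
        enc_states = alpha * enc_states + (1.0 - alpha) * u.unsqueeze(1)

        return dec_state, context, enc_states

# Toy usage with random tensors (batch=2, src_len=5, hidden=8).
step = ScratchpadDecoderStep(hidden_size=8)
dec_state = torch.zeros(2, 8)
enc_states = torch.randn(2, 5, 8)
y_prev = torch.zeros(2, 8)
dec_state, context, enc_states = step(y_prev, dec_state, enc_states)
```

The key design point the sketch tries to capture is that the write happens at every decoder time step, so the encoder memory evolves with generation rather than remaining a static representation of the input.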