Incremental Transformer with Deliberation Decoder
for Document Grounded Conversations
Abstract
Document Grounded Conversations is the task of generating dialogue responses when chatting about the content of a given document. Document knowledge plays a critical role in this task, yet existing dialogue models do not exploit such knowledge effectively. In this paper, we propose a novel Transformer-based architecture for multi-turn document grounded conversations. In particular, we devise an Incremental Transformer to encode multi-turn utterances along with knowledge in related documents. Motivated by the human cognitive process, we design a two-pass decoder (Deliberation Decoder) to improve context coherence and knowledge correctness. Our empirical study on a real-world Document Grounded Dataset shows that responses generated by our model significantly outperform competitive baselines in both context coherence and knowledge relevance.
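To make the described pipeline concrete, here is a minimal, purely illustrative sketch of the two ideas: incremental encoding folds each utterance and its document knowledge into a running context state, and the Deliberation Decoder produces a draft from context in a first pass, then refines it against the document in a second pass. All function names and the toy "encoding" are hypothetical stand-ins; the actual model uses Transformer attention layers, not string operations.

```python
# Hypothetical sketch of the incremental-encoding + two-pass decoding flow.
# This is NOT the authors' implementation; Transformer layers are replaced
# by trivial placeholder operations for illustration only.

def encode_utterance(utterance, doc_knowledge, context_state):
    """Incremental encoding step: fold one utterance plus its associated
    document knowledge into the running multi-turn context state
    (stand-in for an Incremental Transformer block)."""
    return context_state + [(utterance, doc_knowledge)]

def first_pass_decode(context_state):
    """Deliberation pass 1: draft a reply from dialogue context only
    (aims at context coherence)."""
    last_utterance, _ = context_state[-1]
    return f"draft reply to: {last_utterance}"

def second_pass_decode(draft, context_state):
    """Deliberation pass 2: refine the draft using the grounding document
    (aims at knowledge correctness)."""
    _, doc_knowledge = context_state[-1]
    return f"{draft} [grounded in: {doc_knowledge}]"

def respond(turns):
    """turns: list of (utterance, relevant_document_snippet) pairs."""
    state = []
    for utt, doc in turns:                   # encode turns incrementally
        state = encode_utterance(utt, doc, state)
    draft = first_pass_decode(state)         # deliberation: pass 1
    return second_pass_decode(draft, state)  # deliberation: pass 2

reply = respond([("Have you seen Inception?",
                  "Inception (2010), directed by Christopher Nolan")])
print(reply)
```

The point of the two-pass structure is the division of labor: the first pass only has to stay coherent with the conversation, while the second pass can revise the whole draft with the document in view, rather than committing to knowledge token by token.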