Jointly Learning Semantic Parser and Natural Language Generator via Dual Information Maximization
Abstract
Semantic parsing aims to transform natural
language (NL) utterances into formal meaning
representations (MRs), whereas an NL generator achieves the reverse: producing an NL description for a given MR. Despite this
intrinsic connection, the two tasks have often
been studied separately in prior work. In this paper,
we model the duality of these two tasks via a
joint learning framework, and demonstrate its
effectiveness in boosting the performance of
both tasks. Concretely, we propose the method
of dual information maximization (DIM) to
regularize the learning process, where DIM
empirically maximizes the variational lower
bounds of expected joint distributions of NL
and MRs. We further extend DIM to a semisupervision setup (SEMIDIM), which leverages unlabeled data of both tasks. Experiments on three datasets of dialogue management and code generation (and summarization) show that performance on both semantic parsing and NL generation can be consistently improved by DIM, in both supervised
and semi-supervised setups.
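To make the objective concrete, the following is a minimal sketch of the kind of regularizer DIM adds; the notation ($q_{\phi}$ for the semantic parser, $p_{\theta}$ for the NL generator, $x$ for an NL utterance, $y$ for an MR, $\mathcal{D}$ for the labeled pairs) is ours, not necessarily the paper's exact formulation:
\[
\mathcal{J}_{\mathrm{DIM}}
\;=\; \mathbb{E}_{(x,y)\sim \mathcal{D}}\!\left[\log p(x)\, q_{\phi}(y \mid x)\right]
\;+\; \mathbb{E}_{(x,y)\sim \mathcal{D}}\!\left[\log p(y)\, p_{\theta}(x \mid y)\right],
\]
i.e., the joint distribution of NL and MRs is factorized in both directions, once through the parser and once through the generator. Since the marginals $p(x)$ and $p(y)$ are intractable, DIM instead maximizes variational lower bounds of these two expected log-joints, estimated empirically on the training data alongside the standard supervised losses of the two models.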