Abstract
Neural semantic parsers use the encoder-decoder framework to learn an end-to-end model that transduces a natural language sentence into its formal semantic representation. To keep the model aware of the underlying grammar of target sequences, many constrained decoders have been devised in a multi-stage paradigm: they first decode to sketches or abstract syntax trees, and then decode to the target semantic tokens. We instead propose an adaptive decoding method that avoids such intermediate representations. The decoder is guided by model uncertainty and automatically applies deeper computation when necessary, so it can predict tokens adaptively. Our model outperforms state-of-the-art neural models while requiring no expert knowledge such as a predefined grammar or sketches.
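The core idea of uncertainty-guided adaptive computation can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: `refine` is a hypothetical stand-in for applying extra decoder layers, and the entropy threshold is an assumed hyperparameter. It only shows the control flow: measure predictive entropy, and spend more computation on a token when the model is uncertain.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy (nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def adaptive_decode_step(logits, refine, threshold=1.0, max_extra_steps=3):
    """One decoding step with uncertainty-guided depth (illustrative).

    While the predictive entropy exceeds `threshold`, apply the
    hypothetical `refine` function (standing in for deeper decoder
    computation), up to `max_extra_steps` times, then emit the argmax.
    Returns (token_index, extra_steps_used).
    """
    steps = 0
    probs = softmax(logits)
    while entropy(probs) > threshold and steps < max_extra_steps:
        logits = refine(logits)  # deeper computation on uncertain tokens
        probs = softmax(logits)
        steps += 1
    token = max(range(len(probs)), key=probs.__getitem__)
    return token, steps

# Toy usage: a near-uniform distribution triggers extra refinement,
# where `refine` simply sharpens the logits by scaling them.
token, steps = adaptive_decode_step(
    [1.0, 1.1, 0.9], refine=lambda ls: [2 * x for x in ls]
)
```
A confident (low-entropy) prediction would exit the loop immediately with `steps == 0`, so easy tokens remain cheap while ambiguous ones receive deeper processing.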