Abstract
Sequence-to-sequence (seq2seq) approaches formalize Abstract Meaning Representation (AMR)
parsing as a translation task from a source sentence
to a target AMR graph. However, previous studies generally model the source sentence as a bare word
sequence, ignoring the syntactic and semantic information inherent in the sentence. In this paper,
we propose two effective approaches to explicitly
incorporating source syntax and semantics into neural seq2seq AMR parsing. The first approach linearizes the source syntactic and semantic structure into
a mixed sequence of words, syntactic labels, and
semantic labels.
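To make the linearization concrete, here is a minimal Python sketch. The interleaving order, the tag markers such as <syn:...> and <sem:...>, the function name linearize, and the example dependency and semantic-role labels are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch: interleave each word with its syntactic and semantic labels,
# producing a mixed sequence that a standard seq2seq encoder can consume.
# Tag format and ordering are assumptions for illustration.

def linearize(tokens, syn_labels, sem_labels):
    """Return a mixed sequence of label tokens and words."""
    mixed = []
    for word, syn, sem in zip(tokens, syn_labels, sem_labels):
        mixed.append(f"<syn:{syn}>")   # e.g. the word's dependency relation
        mixed.append(f"<sem:{sem}>")   # e.g. the word's semantic-role tag
        mixed.append(word)
    return mixed

tokens = ["The", "boy", "wants", "to", "go"]
syn = ["det", "nsubj", "root", "mark", "xcomp"]  # dependency relations
sem = ["O", "ARG0", "V", "O", "ARG1"]            # semantic-role tags
print(" ".join(linearize(tokens, syn, sem)))
```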
In the second approach, we propose a syntactic and semantic structure-aware
encoding scheme that uses a self-attentive model to
explicitly capture syntactic and semantic relations
between words.
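As a rough illustration of structure-aware encoding, the sketch below adds a bias to standard scaled dot-product self-attention scores for word pairs linked by a syntactic or semantic relation. The function name, the binary relation_mask, and the fixed additive bias are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def structure_aware_attention(Q, K, V, relation_mask, bias=1.0):
    """Self-attention with an additive bias on structurally related pairs.
    relation_mask[i, j] = 1 if words i and j share a syntactic or
    semantic relation, else 0 (an assumed encoding of the structure)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)           # standard attention scores
    scores = scores + bias * relation_mask  # boost related word pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row softmax
    return weights @ V

# Toy example: 4 words, dimension 8; word 0 relates to 2, word 1 to 3.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
mask = np.zeros((4, 4))
mask[0, 2] = mask[2, 0] = mask[1, 3] = mask[3, 1] = 1.0
out = structure_aware_attention(Q, K, V, mask)
```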
Experimental results on an English benchmark dataset show that our two approaches
achieve significant improvements of 3.1% and 3.4%
F1 score, respectively, over a strong seq2seq baseline.