Abstract
Aspect term extraction (ATE) aims to identify all aspect terms in a sentence and is usually modeled as a sequence labeling problem. However, sequence labeling based methods cannot make full use of the overall meaning of the whole sentence and are limited in modeling dependencies between labels. To tackle these problems, we first explore formalizing ATE as a sequence-to-sequence (Seq2Seq) learning task, where the source sequence and the target sequence are composed of words and labels, respectively. At the same time, to adapt Seq2Seq learning to ATE, where labels correspond to words one by one, we design gated unit networks to incorporate the corresponding word representation into the decoder, and position-aware attention to attend more to the words adjacent to a target word. Experimental results on two datasets show that Seq2Seq learning is effective for ATE when combined with our proposed gated unit networks and position-aware attention mechanism.
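To make the two components concrete, the following is a minimal sketch in PyTorch, assuming illustrative formulations for both: the abstract does not give the exact equations, so the gating function, the linear distance penalty, the `decay` hyperparameter, and the names `GatedUnit` and `position_aware_attention` are all hypothetical, chosen only to show the general idea of fusing the aligned word into the decoder state and biasing attention toward nearby words.

```python
# Illustrative sketch only; the exact gating and position functions in the
# paper may differ from the assumed formulations below.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedUnit(nn.Module):
    """Gate deciding how much of the aligned word vector enters the decoder."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, dec_state, word_repr):
        # g in (0, 1): element-wise mixing weights (assumed formulation)
        g = torch.sigmoid(self.gate(torch.cat([dec_state, word_repr], dim=-1)))
        return g * word_repr + (1.0 - g) * dec_state

def position_aware_attention(query, keys, pos, decay=0.5):
    """Attention whose scores are penalized by distance to position `pos`.

    query: (hidden,), keys: (seq_len, hidden). `decay` is an assumed
    hyperparameter controlling how fast attention falls off with distance.
    """
    seq_len = keys.size(0)
    scores = keys @ query                                  # dot-product relevance
    dist = torch.abs(torch.arange(seq_len, dtype=torch.float) - pos)
    scores = scores - decay * dist                         # nearer words score higher
    weights = F.softmax(scores, dim=0)
    return weights @ keys                                  # context vector

# Toy usage: at decoding step t, attend around position t and fuse the
# decoder state with the encoder representation of the t-th source word.
hidden = 8
enc_outputs = torch.randn(5, hidden)                       # toy encoder states
dec_state = torch.randn(hidden)
t = 2                                                      # current decoding position
context = position_aware_attention(dec_state, enc_outputs, pos=t)
fused = GatedUnit(hidden)(dec_state, enc_outputs[t])
print(fused.shape, context.shape)
```

The design intuition, as described in the abstract, is that because labels align one-to-one with words, the decoder at step t should be anchored to the t-th word rather than free-running as in standard Seq2Seq decoding; the gate and the distance penalty are two assumed mechanisms for enforcing that alignment.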