Abstract
Most existing neural models for math word problems use Seq2Seq models to generate solution expressions sequentially from left to right; their results are far from satisfactory because they lack the goal-driven mechanism commonly seen in human problem solving. This paper proposes a tree-structured neural model that generates expression trees in a goal-driven manner. Given a math word problem, the model first identifies and encodes the goal to achieve, and then recursively decomposes the goal, top-down, into sub-goals combined by an operator. This process repeats until a goal is simple enough to be realized by a known quantity as a leaf node. During the process,
two-layer gated feed-forward networks are designed to implement each step of goal decomposition, and a recursive neural network encodes fulfilled subtrees into subtree embeddings, which represent subtrees better than their goals alone. Experimental results on the Math23K dataset show that our tree-structured model significantly outperforms several state-of-the-art models.
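The goal-driven decoding described above can be illustrated with a minimal, non-neural sketch. Here the neural goal predictor is replaced by a hypothetical lookup-table `policy` stub, and the subtree-embedding merge is replaced by plain arithmetic evaluation; `decode` and `toy_policy` are illustrative names, not part of the actual model.

```python
# Toy sketch of goal-driven tree decoding (hypothetical, non-neural):
# each "goal" is either realized as a known quantity (leaf) or decomposed
# top-down into two sub-goals joined by an operator.

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def decode(goal, policy):
    """Recursively decode `goal` into an expression tree.

    `policy(goal)` stands in for the neural predictor: it returns either
    ("leaf", value) or ("op", symbol, left_goal, right_goal).
    Returns (tree, value): the tree as a nested tuple, plus its evaluated
    value -- the stand-in for a subtree embedding in this toy version.
    """
    kind, *rest = policy(goal)
    if kind == "leaf":
        value = rest[0]
        return value, value
    op, left_goal, right_goal = rest
    left_tree, left_val = decode(left_goal, policy)     # fulfill left sub-goal
    right_tree, right_val = decode(right_goal, policy)  # then right sub-goal
    # merge the fulfilled subtrees bottom-up (recursive-NN analogue)
    return (op, left_tree, right_tree), OPS[op](left_val, right_val)

# Hand-written policy for "3 pens at 2 dollars each, plus 1 dollar tax"
def toy_policy(goal):
    table = {
        "total": ("op", "+", "cost", "tax"),
        "cost":  ("op", "*", "n", "price"),
        "n": ("leaf", 3), "price": ("leaf", 2), "tax": ("leaf", 1),
    }
    return table[goal]

tree, value = decode("total", toy_policy)
print(tree, value)  # ('+', ('*', 3, 2), 1) 7
```

In the actual model the `policy` decision at each node is produced by the gated feed-forward networks conditioned on the goal vector and the problem encoding, rather than a fixed table.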