Abstract
In recent years, memory-augmented neural networks (MANNs) have shown promise in enhancing the memory capacity of neural networks for sequential processing tasks. However, previous MANNs suffer from complex memory addressing mechanisms, making them relatively hard to train and incurring computational overhead. Moreover, many of them reuse classical RNN structures such as the LSTM for memory processing, leading to inefficient exploitation of memory information.
In this paper, we introduce a novel MANN, the Auto-addressing and Recurrent Memory Integrating Network (ARMIN), to address these issues. ARMIN uses only the hidden state h_t for automatic memory addressing, together with a novel RNN cell for refined integration of memory information. Empirical results across a variety of experiments demonstrate that ARMIN is more lightweight and efficient than existing memory networks. Moreover,
we demonstrate that ARMIN achieves much lower computational overhead than a vanilla LSTM while maintaining similar performance. Code is available at github.com/zoharli/armin.