Abstract
Recurrent Neural Networks (RNNs) have shown great promise in sequence modeling tasks. The Gated Recurrent Unit (GRU) is one of the most widely used recurrent structures, offering a good trade-off between performance and computational cost. However, its practical implementation, based on soft gates, only partially achieves the goal of controlling information flow, and it is hard to explain what the network has learned internally. Inspired by human reading, we introduce the Binary Input Gated Recurrent Unit (BIGRU), a GRU-based model that uses a binary input gate in place of the reset gate. As a result, our model can read selectively during inference. In our experiments, we show that BIGRU mainly ignores conjunctions, adverbs and articles that contribute little to document understanding, which helps explain how the network works. In addition, owing to reduced interference from redundant information, our model outperforms the baseline GRU on all of the evaluated tasks.
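
As a rough illustration of the idea, the sketch below contrasts the standard GRU update with one possible form of a binary input gate replacing the reset gate; the binarization scheme (hard thresholding, typically trained with a straight-through estimator) and the exact gating placement are assumptions for illustration, not necessarily the formulation used in this paper.

Standard GRU (soft gates):
  z_t = \sigma(W_z x_t + U_z h_{t-1})                      % update gate
  r_t = \sigma(W_r x_t + U_r h_{t-1})                      % reset gate
  \tilde{h}_t = \tanh(W x_t + U (r_t \odot h_{t-1}))
  h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t

Illustrative binary-input-gate variant (assumed form):
  i_t = \mathrm{round}(\sigma(W_i x_t + U_i h_{t-1})) \in \{0, 1\}   % binary input gate
  \tilde{h}_t = \tanh(W (i_t \odot x_t) + U h_{t-1})                 % input is either read or skipped
  h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t

Under this kind of formulation, a token whose gate evaluates to 0 contributes nothing to the candidate state, which is what allows the model to "skip" uninformative words such as conjunctions, adverbs and articles.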