
Kronecker Recurrent Units

2020-03-16

Abstract

Our work addresses two important issues with recurrent neural networks: (1) they are overparametrized, and (2) the recurrent weight matrix is ill-conditioned. The former increases the sample complexity of learning and the training time. The latter causes the vanishing and exploding gradient problem. We present a flexible recurrent neural network model called Kronecker Recurrent Units (KRU). KRU achieves parameter efficiency in RNNs through a Kronecker-factored recurrent matrix. It overcomes the ill-conditioning of the recurrent matrix by enforcing soft unitary constraints on the factors. Thanks to the small dimensionality of the factors, maintaining these constraints is computationally efficient. Our experimental results on seven standard data sets reveal that KRU can reduce the number of parameters by three orders of magnitude in the recurrent weight matrix compared to existing recurrent models, without trading off statistical performance. In particular, these results show that while there are advantages in having a high-dimensional recurrent space, the capacity of the recurrent part of the model can be dramatically reduced.
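The two mechanisms described in the abstract, a Kronecker-factored recurrent weight matrix and a soft unitary penalty on its small factors, can be illustrated with a minimal NumPy sketch. The function names, the real-valued 2x2 factors, and the penalty form below are illustrative assumptions, not the authors' implementation; the paper's construction is more general.

```python
import numpy as np

def kron_recurrent_matrix(factors):
    """Build the full recurrent matrix as a Kronecker product of small factors.

    A chain of K factors of shape (p_k, q_k) yields a matrix of shape
    (prod p_k, prod q_k) while storing only sum(p_k * q_k) parameters,
    which is where the parameter savings come from.
    """
    W = factors[0]
    for F in factors[1:]:
        W = np.kron(W, F)
    return W

def soft_unitary_penalty(factors):
    """Soft unitary regularizer: sum of ||F^H F - I||_F^2 over the factors.

    Because each factor is tiny, evaluating this penalty is cheap, which is
    what makes maintaining approximate unitarity computationally efficient.
    """
    penalty = 0.0
    for F in factors:
        G = F.conj().T @ F
        penalty += np.sum(np.abs(G - np.eye(G.shape[0])) ** 2)
    return penalty

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three 2x2 factors give an 8x8 recurrent matrix from only 12 parameters.
    factors = [rng.standard_normal((2, 2)) for _ in range(3)]
    W = kron_recurrent_matrix(factors)
    print(W.shape)                       # (8, 8)
    print(soft_unitary_penalty(factors)) # added to the training loss in practice
```

In a training loop, the penalty would be scaled by a coefficient and added to the task loss, softly pushing each factor (and hence the full Kronecker product) toward a unitary matrix, which mitigates vanishing and exploding gradients.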
