TRAINING RECURRENT NEURAL NETWORKS ONLINE BY LEARNING EXPLICIT STATE VARIABLES

2019-12-30

Abstract

Recurrent neural networks (RNNs) provide a powerful tool for online prediction in partially observable problems. However, there are two primary issues one must overcome when training an RNN: the sensitivity of the learning algorithm's performance to truncation length, and long training times. There are a variety of strategies to improve training in RNNs, particularly with truncated Backpropagation Through Time (BPTT) and Real-Time Recurrent Learning (RTRL). These strategies, however, are typically computationally expensive and focus computation on estimating gradients back in time. In this work, we reformulate the RNN training objective to explicitly learn state vectors; this breaks the dependence across time and so avoids the need to estimate gradients far back in time. We show that for a fixed buffer of data, our algorithm—called Fixed Point Propagation (FPP)—is sound: it converges to a stationary point of the new objective. We investigate the empirical performance of our online FPP algorithm, particularly in terms of computation compared to truncated BPTT with varying truncation levels.
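The core reformulation the abstract describes can be sketched in a few lines: treat the buffered states as free, learned variables and minimize a prediction loss plus a one-step state-consistency penalty, so no gradient ever has to flow more than one step back in time. The architecture, penalty form, and all names below (`W_s`, `lam`, etc.) are illustrative assumptions for this sketch, not the paper's exact FPP formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; every name and value here is illustrative, not from the paper.
d_x, d_s, T = 3, 4, 40   # observation dim, state dim, buffer length

# A simple tanh RNN transition with a linear readout (stand-in architecture).
W_s = rng.normal(scale=0.1, size=(d_s, d_s))
W_x = rng.normal(scale=0.1, size=(d_s, d_x))
W_o = rng.normal(scale=0.1, size=(1, d_s))

X = rng.normal(size=(T, d_x))             # buffered observations
Y = np.tanh(np.cumsum(X[:, :1], axis=0))  # toy targets with temporal structure

# Key idea: states are explicit, learned variables -- one per buffered step.
S = rng.normal(scale=0.01, size=(T, d_s))

lam = 1.0                 # weight on the state-consistency penalty
lr_s, lr_w = 0.02, 0.005  # step sizes for states and parameters

def objective():
    pred_err = S @ W_o.T - Y
    R = S[1:] - np.tanh(S[:-1] @ W_s.T + X[1:] @ W_x.T)
    return float((pred_err ** 2).sum() + lam * (R ** 2).sum())

init_loss = objective()
for _ in range(1000):
    pred_err = S @ W_o.T - Y                  # readout residual, shape (T, 1)
    A = S[:-1] @ W_s.T + X[1:] @ W_x.T        # transition pre-activations
    H = np.tanh(A)
    R = S[1:] - H                             # one-step consistency residual
    D = (1.0 - H ** 2) * R                    # residual pushed through tanh'

    # Gradients w.r.t. the explicit states: each state couples only to its
    # immediate neighbours, so nothing propagates far back in time.
    gS = 2 * pred_err @ W_o
    gS[1:] += 2 * lam * R
    gS[:-1] -= 2 * lam * D @ W_s

    # Gradients w.r.t. the network parameters (one-step terms only).
    gWo = 2 * pred_err.T @ S
    gWs = -2 * lam * D.T @ S[:-1]
    gWx = -2 * lam * D.T @ X[1:]

    S -= lr_s * gS
    W_o -= lr_w * gWo
    W_s -= lr_w * gWs
    W_x -= lr_w * gWx

final_loss = objective()
```

Note the contrast with truncated BPTT: here the update for any single time step touches at most two adjacent state vectors, regardless of how long the temporal dependencies in the data are.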
