
Continual Reinforcement Learning with Complex Synapses

2020-03-16

Abstract

Unlike humans, who are capable of continual learning over their lifetimes, artificial neural networks have long been known to suffer from a phenomenon known as catastrophic forgetting, whereby new learning can lead to abrupt erasure of previously acquired knowledge. Whereas in a neural network the parameters are typically modelled as scalar values, an individual synapse in the brain comprises a complex network of interacting biochemical components that evolve at different timescales. In this paper, we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity (Benna & Fusi, 2016), catastrophic forgetting can be mitigated at multiple timescales. In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database.
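The Benna & Fusi (2016) synapse referenced in the abstract can be pictured as a chain of coupled "beakers": the first variable is the visible weight, and each deeper variable changes ever more slowly, retaining older values. Below is a minimal illustrative sketch of such a chain; the class name, beaker count, and coupling constants are our own assumptions for illustration, not the paper's exact hyperparameters.

```python
import numpy as np

class BennaFusiSynapse:
    """Illustrative sketch of the Benna & Fusi (2016) chain model.

    Each parameter is backed by a chain of hidden variables u[0..N-1].
    Coupling strengths g shrink geometrically and capacities C grow
    geometrically, so deeper variables evolve on slower timescales and
    act as a memory of older weight values.
    """

    def __init__(self, shape, n_beakers=4, g1=0.05):
        # u[0] is the visible weight seen by the learning algorithm
        self.u = np.zeros((n_beakers,) + shape)
        k = np.arange(n_beakers - 1)
        self.g = g1 * 2.0 ** (-2 * k)        # tube widths between beakers
        self.C = 2.0 ** np.arange(n_beakers)  # beaker capacities

    def step(self, grad_input):
        """Apply one learning update plus one step of chain dynamics."""
        u = self.u
        du = np.zeros_like(u)
        du[0] += grad_input  # external learning signal enters beaker 0
        # liquid flows between neighbours, driven by level differences
        for k in range(len(self.g)):
            flow = self.g[k] * (u[k + 1] - u[k])
            du[k] += flow
            du[k + 1] -= flow
        # deeper beakers have larger capacity, hence slower dynamics
        self.u = u + du / self.C[:, None]
        return self.u[0]  # visible weight
```

Driving the chain with a constant signal shows the intended timescale separation: the visible weight tracks the input quickly, while deeper variables lag behind and would decay back toward them only slowly after the signal stops.

```python
syn = BennaFusiSynapse((2,), n_beakers=4)
for _ in range(100):
    w = syn.step(np.array([1.0, -1.0]))
# deeper beakers trail the visible weight
print(syn.u[:, 0])
```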

