A Family of Robust Stochastic Operators for Reinforcement Learning

2020-02-26

Abstract

We consider a new family of stochastic operators for reinforcement learning that seeks to alleviate the negative effects of, and be more robust to, approximation or estimation errors. Theoretical results are established showing that our family of operators preserves optimality and increases the action gap in a stochastic sense. Empirical results illustrate the strong benefits of our robust stochastic operators, which significantly outperform the classical Bellman operator and recently proposed operators.
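The abstract does not spell out the operator family itself, so the following is only a minimal sketch of the general idea it describes: a Bellman-style tabular Q-learning update whose target subtracts a randomly scaled action gap, which both preserves the greedy policy and widens the gap between the best and remaining actions. The function name, the uniform distribution for beta, and all parameter values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def robust_stochastic_update(Q, x, a, r, x_next,
                             gamma=0.99, alpha=0.1, beta_max=1.0, rng=None):
    """One tabular Q-learning step with a stochastic, gap-increasing target.

    Sketch under assumptions: the target subtracts a randomly weighted
    action gap, beta * (max_b Q[x, b] - Q[x, a]), from the usual Bellman
    target. The distribution of beta and the exact operator form are
    illustrative choices, not quoted from the abstract above.
    """
    rng = np.random.default_rng() if rng is None else rng
    beta = rng.uniform(0.0, beta_max)               # random weight on the action gap
    bellman_target = r + gamma * np.max(Q[x_next])  # classical Bellman target
    action_gap = np.max(Q[x]) - Q[x, a]             # gap at the current state-action pair
    target = bellman_target - beta * action_gap     # gap-increasing stochastic target
    Q[x, a] += alpha * (target - Q[x, a])           # standard TD step toward that target
    return Q
```

With beta fixed at zero this reduces to ordinary Q-learning; drawing beta at random on each update is what makes the operator stochastic, and the subtracted gap term is what increases the action gap relative to the classical Bellman update.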

