
HARNESSING STRUCTURES FOR VALUE-BASED PLANNING AND REINFORCEMENT LEARNING

2020-01-02

Abstract

Value-based methods constitute a fundamental methodology in planning and deep reinforcement learning (RL). In this paper, we propose to exploit the underlying structures of the state-action value function, i.e., the Q function, for both planning and deep RL. In particular, if the underlying system dynamics lead to some global structures of the Q function, one should be capable of inferring the function better by leveraging such structures. Specifically, we investigate the low-rank structure, which widely exists for big data matrices. We verify empirically the existence of low-rank Q functions in the context of control and deep RL tasks (Atari games). As our key contribution, by leveraging Matrix Estimation (ME) techniques, we propose a general framework to exploit the underlying low-rank structure in Q functions, leading to a more efficient planning procedure for classical control, and additionally, a simple scheme that can be applied to any value-based RL technique to consistently achieve better performance on “low-rank” tasks. Extensive experiments on control tasks and Atari games confirm the efficacy of our approach.
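To illustrate the core idea, the sketch below reconstructs a partially observed Q matrix over a discretized state-action grid by iterative truncated-SVD completion, one of the simplest forms of matrix estimation. This is not the paper's exact ME routine; the grid sizes, rank, sampling ratio, and helper name `complete_low_rank` are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' exact ME algorithm):
# recover a low-rank Q matrix from a sparse set of evaluated entries.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, true_rank = 200, 50, 3

# Synthetic "ground-truth" Q matrix with an exact low-rank structure.
Q_true = rng.normal(size=(n_states, true_rank)) @ rng.normal(size=(true_rank, n_actions))

# Observe only a subset of (state, action) entries, mimicking a planner
# that evaluates Q on a sampled sub-grid rather than exhaustively.
mask = rng.random((n_states, n_actions)) < 0.3
Q_obs = np.where(mask, Q_true, 0.0)

def complete_low_rank(Q_obs, mask, rank, n_iters=200):
    """Iterative truncated-SVD imputation: fill missing entries with the
    current rank-k approximation, keep observed entries fixed, repeat."""
    Q_hat = Q_obs.copy()
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Q_hat, full_matrices=False)
        Q_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        Q_hat = np.where(mask, Q_obs, Q_low)
    return Q_low

Q_rec = complete_low_rank(Q_obs, mask, rank=true_rank)
rel_err = np.linalg.norm(Q_rec - Q_true) / np.linalg.norm(Q_true)
print(f"relative reconstruction error: {rel_err:.3e}")
```

If the true Q matrix is (approximately) low-rank, the reconstruction error is small even though only a fraction of entries were evaluated, which is what makes planning on a sub-sampled grid viable.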
