
Scalable Bilinear π Learning Using State and Action Features

2020-03-20

Abstract

Approximate linear programming (ALP) represents one of the major algorithmic families for solving large-scale Markov decision processes (MDPs). In this work, we study a primal-dual formulation of the ALP and develop a scalable, model-free algorithm called bilinear π learning for reinforcement learning when a sampling oracle is provided. This algorithm enjoys a number of advantages. First, it adopts linear and bilinear models to represent the high-dimensional value function and state-action distributions, respectively, using given state and action features. Its run-time complexity depends on the number of features, not the size of the underlying MDP. Second, it operates in a fully online fashion without having to store any sample, thus having minimal memory footprint. Third, we prove that it is sample-efficient, solving for the optimal policy to high precision with a sample complexity linear in the dimension of the parameter space.
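To make the feature-based parameterization concrete, here is a minimal toy sketch of a primal-dual update in this spirit: a value function that is linear in state features and a state-action score that is bilinear in state and action features, updated from single sampled transitions. This is an illustrative assumption-laden sketch, not the authors' exact algorithm (the paper's projection/mirror steps, constraint handling, and step-size schedule are omitted), and all names and the toy MDP are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP, tabular only so we can sample transitions easily.
S, A, gamma = 20, 5, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
R = rng.uniform(size=(S, A))                  # reward table

# Given state features Phi (S x d1) and action features Psi (A x d2).
d1, d2 = 6, 3
Phi = rng.normal(size=(S, d1))
Psi = rng.normal(size=(A, d2))

w = np.zeros(d1)            # linear value model: V(s) ~ Phi[s] @ w
Theta = np.ones((d1, d2))   # bilinear dual model: score(s, a) = Phi[s] @ Theta @ Psi[a]
eta = 0.05                  # step size (assumption; the paper analyzes specific schedules)

for t in range(5000):
    # Sampling oracle: draw a state-action pair and observe one transition.
    s = rng.integers(S)
    a = rng.integers(A)
    s_next = rng.choice(S, p=P[s, a])

    # Stochastic Bellman residual at (s, a), using only the sampled next state.
    delta = R[s, a] + gamma * Phi[s_next] @ w - Phi[s] @ w

    # Dual ascent on the bilinear parameter, primal descent on the value weights.
    Theta += eta * delta * np.outer(Phi[s], Psi[a])
    w -= eta * (gamma * Phi[s_next] - Phi[s]) * (Phi[s] @ Theta @ Psi[a])

# A greedy policy read off from the learned bilinear scores.
policy = np.argmax(Phi @ Theta @ Psi.T, axis=1)
```

Note that memory and per-step compute depend only on the feature dimensions `d1` and `d2`, not on `S` or `A`, which is the scalability property the abstract highlights; the tabular `P` and `R` here exist only to simulate the sampling oracle.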

