
Weighted importance sampling for off-policy learning with linear function approximation


Abstract

Importance sampling is an essential component of off-policy model-free reinforcement learning algorithms. However, its most effective variant, weighted importance sampling, does not carry over easily to function approximation and, because of this, it is not utilized in existing off-policy learning algorithms. In this paper, we take two steps toward bridging this gap. First, we show that weighted importance sampling can be viewed as a special case of weighting the error of individual training samples, and that this weighting has theoretical and empirical benefits similar to those of weighted importance sampling. Second, we show that these benefits extend to a new weighted-importance-sampling version of off-policy LSTD(λ). We show empirically that our new WIS-LSTD(λ) algorithm can result in much more rapid and reliable convergence than conventional off-policy LSTD(λ) (Yu, 2010; Bertsekas & Yu, 2009).
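To make the distinction in the abstract concrete, the sketch below contrasts the ordinary and weighted importance-sampling estimators on synthetic data and checks the paper's first observation: the WIS estimate is exactly the minimizer of an importance-weighted squared error over individual samples. This is a minimal illustration, not the paper's implementation; the data and all names (returns, rhos) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical off-policy data: returns G_i observed while following a
# behavior policy mu, each with an importance ratio
# rho_i = pi(a_i | s_i) / mu(a_i | s_i) for the target policy pi.
returns = rng.normal(loc=1.0, scale=2.0, size=1000)
rhos = rng.uniform(0.1, 3.0, size=1000)

# Ordinary importance sampling (OIS): unbiased, but its variance grows
# with the spread of the importance ratios.
ois = np.mean(rhos * returns)

# Weighted importance sampling (WIS): normalizes by the summed ratios;
# biased but consistent, and typically much lower variance.
wis = np.average(returns, weights=rhos)

# Error-weighting view from the abstract: WIS is the v that minimizes
# sum_i rho_i * (G_i - v)^2. The closed-form minimizer of that weighted
# squared error coincides with the WIS estimate.
v_star = (rhos @ returns) / rhos.sum()
assert np.isclose(wis, v_star)

print(f"OIS estimate: {ois:.3f}  WIS estimate: {wis:.3f}")
```

Recasting WIS as a per-sample error weighting is what lets the idea transfer to least-squares methods with linear function approximation, which is the route the paper takes to WIS-LSTD(λ).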


