Temporal Difference Methods for the Variance of the Reward To Go

2020-03-02

Abstract

In this paper we extend temporal difference policy evaluation algorithms to performance criteria that include the variance of the cumulative reward. Such criteria are useful for risk management, and are important in domains such as finance and process control. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in a 4-dimensional continuous state space problem.
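Since the abstract only summarizes the approach, the following is a minimal sketch of the core idea: run TD(0)-style updates that jointly estimate the expected reward-to-go J(s) and its second moment M(s), from which the variance is V(s) = M(s) - J(s)^2. The second-moment target uses the identity M(s) = E[r^2 + 2*gamma*r*J(s') + gamma^2*M(s')], obtained by expanding (r + gamma*R')^2 for the next-state reward-to-go R'. The toy chain MDP, one-hot features, and step sizes below are illustrative assumptions; this is not the paper's exact algorithm (in particular, it omits the LSTD(λ) variant and the convergence analysis).

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, gamma = 5, 0.95
alpha_j, alpha_m = 0.05, 0.05          # step sizes for J and M weights (assumed)

def phi(s):
    """One-hot features; any linear feature map could be substituted."""
    f = np.zeros(n_states)
    f[s] = 1.0
    return f

w_j = np.zeros(n_states)               # weights for J(s) ~ phi(s) @ w_j
w_m = np.zeros(n_states)               # weights for M(s) ~ phi(s) @ w_m

def step(s):
    """Hypothetical toy chain: move right with noisy reward, wrapping around."""
    r = 1.0 + rng.normal(scale=0.5)
    s_next = (s + 1) % n_states
    return r, s_next

s = 0
for _ in range(50_000):
    r, s_next = step(s)
    f, f_next = phi(s), phi(s_next)
    j_next = f_next @ w_j

    # Standard TD(0) error for the first moment J.
    delta_j = r + gamma * j_next - (f @ w_j)
    # TD(0)-style error for the second moment M, from the identity
    # M(s) = E[r^2 + 2*gamma*r*J(s') + gamma^2*M(s')].
    delta_m = r**2 + 2 * gamma * r * j_next + gamma**2 * (f_next @ w_m) - (f @ w_m)

    w_j += alpha_j * delta_j * f
    w_m += alpha_m * delta_m * f
    s = s_next

j_est = np.array([phi(i) @ w_j for i in range(n_states)])
var_est = np.array([phi(i) @ w_m for i in range(n_states)]) - j_est**2
print("J estimates:", np.round(j_est, 2))
print("Variance estimates:", np.round(var_est, 2))
```

With tabular (one-hot) features these coupled updates converge to the chain's true first and second moments; the paper's contribution is analyzing such TD and LSTD(λ) variants under general linear function approximation and proving their convergence.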

