Sample Complexity of Risk-Averse Bandit-Arm Selection

2019-11-11

Abstract: We consider stochastic multi-armed bandit problems where each arm generates i.i.d. rewards according to an unknown distribution. Whereas classical bandit solutions only maximize the expected reward, we consider the problem of minimizing risk using notions such as the value-at-risk, the average value-at-risk, and the mean-variance risk. We present algorithms to minimize the risk over a single time period and over multiple time periods, along with PAC accuracy guarantees given a finite number of reward samples. In the single-period case, we show that finding the arm with the least risk requires not many more samples than finding the arm with the highest expected reward. Although minimizing the multiperiod value-at-risk is known to be hard, we present an algorithm with comparable sample complexity under additional assumptions.
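To make the three risk notions concrete, here is a minimal Python sketch of single-period, plug-in risk estimation and least-risk arm selection. This is an illustration, not the paper's algorithm: the loss sign convention, the mean-variance form with trade-off parameter rho, and the example arm distributions are all assumptions.

```python
import numpy as np

def empirical_var(samples, alpha):
    """Empirical value-at-risk at level alpha: the alpha-quantile of losses.

    Rewards are negated so that larger loss means higher risk;
    this sign convention is an assumption, not taken from the paper."""
    losses = -np.asarray(samples)
    return np.quantile(losses, alpha)

def empirical_avar(samples, alpha):
    """Empirical average value-at-risk (CVaR): mean loss in the alpha-tail."""
    losses = -np.asarray(samples)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def empirical_mean_variance(samples, rho):
    """Mean-variance risk, taken here (an assumed form) as
    variance minus rho times the mean reward."""
    samples = np.asarray(samples)
    return samples.var() - rho * samples.mean()

def least_risk_arm(reward_samples, risk_fn, **kwargs):
    """Return the index of the arm with the smallest empirical risk.

    reward_samples: list of 1-D arrays, one per arm (i.i.d. reward draws)."""
    risks = [risk_fn(s, **kwargs) for s in reward_samples]
    return int(np.argmin(risks))

# Example: three Gaussian arms; pick the least empirical AVaR at level 0.95.
rng = np.random.default_rng(0)
arms = [rng.normal(mu, sigma, size=1000)
        for mu, sigma in [(1.0, 1.0), (1.2, 3.0), (0.8, 0.5)]]
print("least-risk arm:", least_risk_arm(arms, empirical_avar, alpha=0.95))
```

Note that the arm with the highest mean (the second) has the heaviest tail, so a risk-averse criterion such as AVaR can prefer a different arm than expected-reward maximization, which is the distinction the abstract draws.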
