
Variance Regularized Counterfactual Risk Minimization via Variational Divergence Minimization

2020-03-19

Abstract

Off-policy learning, the task of evaluating and improving policies using historical data collected from a logging policy, is important because on-policy evaluation is usually expensive and can have adverse impacts. One of the major challenges in off-policy learning is to derive counterfactual estimators that also have low variance and thus low generalization error. In this work, inspired by learning bounds for importance sampling problems, we present a new counterfactual learning principle for off-policy learning with bandit feedback. Our method regularizes the generalization error by minimizing the distribution divergence between the logging policy and the new policy, and removes the need to iterate through all training samples to compute the sample-variance regularizer used in prior work. With neural network policies, our end-to-end training algorithms based on variational divergence minimization show significant improvements over conventional baseline algorithms and are consistent with our theoretical results.
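The abstract describes an adversarial recipe: minimize an importance-weighted (counterfactual) risk estimate while penalizing the divergence between the new policy and the logging policy, with the divergence itself estimated variationally. The following is a minimal illustrative sketch of that recipe, not the authors' code. It assumes contextual-bandit logs (contexts x, logged actions a, observed losses delta, logging propensities p0), a small discrete action set, and PyTorch; the names policy, critic, ips_risk, kl_divergence_bound, train_step, N_CONTEXT, N_ACTIONS and lam are all hypothetical choices made for this sketch.

```python
import torch
import torch.nn as nn

N_CONTEXT, N_ACTIONS = 8, 4   # toy dimensions, purely for illustration

# Softmax policy pi_w(a|x) and an f-GAN-style critic T(x, a) used to
# estimate the divergence variationally (illustrative architectures).
policy = nn.Sequential(nn.Linear(N_CONTEXT, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
critic = nn.Sequential(nn.Linear(N_CONTEXT + N_ACTIONS, 32), nn.ReLU(), nn.Linear(32, 1))

opt_policy = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)


def ips_risk(x, a, delta, p0):
    """Importance-weighted (IPS) estimate of the new policy's risk from
    logged contexts x, logged actions a, losses delta, propensities p0."""
    pi = torch.softmax(policy(x), dim=-1)            # pi_w(. | x), shape [B, A]
    pi_a = pi.gather(1, a.unsqueeze(1)).squeeze(1)   # pi_w(a | x) for logged actions
    w = pi_a / p0                                    # importance weights
    return (w * delta).mean()


def kl_divergence_bound(x, a_log):
    """Donsker-Varadhan style variational lower bound on KL(pi_w || pi_0):
    E_{pi_w}[T] - log E_{pi_0}[exp(T)].  The pi_0 expectation uses the logged
    actions; the pi_w expectation is computed exactly over the small discrete
    action set so the bound stays differentiable in the policy parameters."""
    pi = torch.softmax(policy(x), dim=-1)                                  # [B, A]
    eye = torch.eye(N_ACTIONS)
    feats = torch.cat([x.unsqueeze(1).expand(-1, N_ACTIONS, -1),
                       eye.unsqueeze(0).expand(x.size(0), -1, -1)], dim=-1)
    t_all = critic(feats).squeeze(-1)                                      # T(x, a) for every a
    t_new = (pi * t_all).sum(dim=1).mean()                                 # E_{pi_w}[T]
    onehot_log = nn.functional.one_hot(a_log, N_ACTIONS).float()
    t_log = critic(torch.cat([x, onehot_log], dim=1))                      # samples from pi_0
    return t_new - torch.log(torch.exp(t_log).mean() + 1e-8)


def train_step(x, a, delta, p0, lam=0.1):
    """One adversarial update on a logged mini-batch: the critic tightens the
    divergence bound, then the policy minimizes IPS risk plus the divergence
    penalty (weighted by lam) that controls the estimator's variance."""
    opt_critic.zero_grad()
    (-kl_divergence_bound(x, a)).backward()
    opt_critic.step()

    opt_policy.zero_grad()
    loss = ips_risk(x, a, delta, p0) + lam * kl_divergence_bound(x, a)
    loss.backward()
    opt_policy.step()
    return loss.item()
```

As a usage sketch, `train_step` can be called on synthetic logs such as `x = torch.randn(64, N_CONTEXT)`, `a = torch.randint(0, N_ACTIONS, (64,))`, `delta = torch.rand(64)`, `p0 = torch.full((64,), 1.0 / N_ACTIONS)`. The two-step update mirrors the variational divergence minimization idea in the abstract: the divergence penalty is estimated from mini-batches via a critic rather than by iterating over all training samples to compute an explicit sample-variance term.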

