TOWARD EVALUATING ROBUSTNESS OF DEEP REINFORCEMENT LEARNING WITH CONTINUOUS CONTROL

2020-01-02

Abstract

Deep reinforcement learning has achieved great success on many previously difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, much like deep neural networks in classification tasks. Prior work mostly focuses on model-free adversarial attacks and agents with discrete actions. In this work, we study adversarial attacks on continuous-control agents in deep RL and propose the first two-step algorithm based on learned model dynamics. Extensive experiments on various MuJoCo domains (Cartpole, Fish, Walker, Humanoid) demonstrate that our proposed framework is much more effective and efficient than model-free attack baselines at degrading agent performance and driving agents to unsafe states.
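The sketch below illustrates the general idea of such a two-step, model-based attack on a continuous-control agent: first a dynamics model is learned from the victim's trajectories, then a bounded observation perturbation is optimized online so that the fooled policy's action is predicted to lead toward an unsafe state. This is only a minimal illustration under stated assumptions, not the paper's exact algorithm: `policy`, `dynamics`, `safety_score`, and the dimensions and budget `OBS_DIM`, `ACT_DIM`, `EPS` are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions and attack budget; the paper evaluates MuJoCo domains
# (Cartpole, Fish, Walker, Humanoid), but these exact numbers are placeholders.
OBS_DIM, ACT_DIM = 24, 6
EPS = 0.05  # L-infinity bound on the observation perturbation

# Stand-in victim policy: observations -> continuous actions.
policy = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(),
                       nn.Linear(64, ACT_DIM), nn.Tanh())

# Step 1 (offline): a learned dynamics model f(s, a) ~ s', assumed to have been
# fit by regression on trajectories collected from the victim agent.
dynamics = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 128), nn.ReLU(),
                         nn.Linear(128, OBS_DIM))


def safety_score(pred_next_obs: torch.Tensor) -> torch.Tensor:
    """Hypothetical task-specific safety measure: here, predicted torso height
    (assumed to be observation index 0). Lower means closer to an unsafe
    (fallen) state, so the attacker minimizes it."""
    return pred_next_obs[..., 0].sum()


def attack_observation(obs: torch.Tensor, steps: int = 20, lr: float = 1e-2) -> torch.Tensor:
    """Step 2 (online): optimize a bounded perturbation of the observation so
    that the action the fooled policy takes leads, under the learned dynamics
    model, to a predicted next state with a lower safety score."""
    delta = torch.zeros_like(obs, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbed_obs = obs + delta
        action = policy(perturbed_obs)                           # action the agent would take
        pred_next = dynamics(torch.cat([obs, action], dim=-1))   # model-predicted consequence
        loss = safety_score(pred_next)                           # push toward the unsafe region
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-EPS, EPS)                              # respect the attack budget
    return (obs + delta).detach()


# Usage: perturb one (random, illustrative) observation before the agent sees it.
obs = torch.randn(OBS_DIM)
adv_obs = attack_observation(obs)
print(float((adv_obs - obs).abs().max()))  # stays within EPS
```

Under these assumptions, the perturbation never exceeds the L-infinity budget, and the attack objective is evaluated through the learned dynamics rather than by querying the true environment, which is what distinguishes this model-based formulation from model-free attack baselines.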

Previous: MAKING SENSE OF REINFORCEMENT LEARNING AND PROBABILISTIC INFERENCE

Next: COMPOSING TASK-AGNOSTIC POLICIES WITH DEEP REINFORCEMENT LEARNING


Popular Resources

  • Learning to Predi...

    Much of model-based reinforcement learning invo...

  • Stratified Strate...

    In this paper we introduce Stratified Strategy ...

  • The Variational S...

    Unlike traditional images which do not offer in...

  • A Mathematical Mo...

    Direct democracy, where each voter casts one vo...

  • Rating-Boosted La...

    The performance of a recommendation system reli...