
noreward-rl


Curiosity-driven Exploration by Self-supervised Prediction

In ICML 2017 [Project Website] [Demo Video]

Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell
University of California, Berkeley

This is a TensorFlow-based implementation of our ICML 2017 paper on curiosity-driven exploration for reinforcement learning. The idea is to train the agent with an intrinsic curiosity-based motivation (ICM) when external rewards from the environment are sparse. Surprisingly, ICM works even when no rewards are available from the environment at all, in which case the agent learns to explore purely out of curiosity: 'RL without rewards'. If you find this work useful in your research, please cite:

@inproceedings{pathakICMl17curiosity,
    Author = {Pathak, Deepak and Agrawal, Pulkit and
              Efros, Alexei A. and Darrell, Trevor},
    Title = {Curiosity-driven Exploration by Self-supervised Prediction},
    Booktitle = {International Conference on Machine Learning ({ICML})},
    Year = {2017}
}
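To make the idea concrete, here is a minimal NumPy sketch of the curiosity reward: the agent is rewarded by how badly a learned forward model predicts the features of the next observation. This is only a schematic of the mechanism, not the repo's TensorFlow code; the weights, sizes, and function names below are all hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two learned ICM networks (hypothetical sizes, not the
# paper's conv architecture): phi embeds an observation into a feature vector,
# and the forward model predicts the next feature vector from (features, action).
W_phi = rng.normal(size=(16, 64))       # feature-encoder weights
W_fwd = rng.normal(size=(64 + 4, 64))   # forward-model weights

def phi(obs):
    # phi(s_t): observation -> feature vector
    return np.tanh(obs @ W_phi)

def forward_model(feat, action_onehot):
    # f(phi(s_t), a_t): predicts phi(s_{t+1})
    return np.tanh(np.concatenate([feat, action_onehot]) @ W_fwd)

def intrinsic_reward(obs, action_onehot, next_obs, eta=0.01):
    # Curiosity reward: scaled prediction error of the forward model,
    # r_t = (eta / 2) * ||f(phi(s_t), a_t) - phi(s_{t+1})||^2
    pred = forward_model(phi(obs), action_onehot)
    return 0.5 * eta * np.sum((pred - phi(next_obs)) ** 2)

obs, next_obs = rng.normal(size=16), rng.normal(size=16)
action = np.eye(4)[2]   # one-hot action in a 4-action space
print(intrinsic_reward(obs, action, next_obs))

In the full method, phi is trained jointly with an inverse model (predicting a_t from phi(s_t) and phi(s_{t+1})), which keeps the features focused on the parts of the environment the agent can actually affect.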

1) Installation and Usage

  1. This code is based on TensorFlow. To install, run these commands:

# you might not need many of these, e.g., fceux is only for mario
sudo apt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev xvfb \
libav-tools xorg-dev python-opengl libboost-all-dev libsdl2-dev swig python3-dev \
python3-venv make golang libjpeg-turbo8-dev gcc wget unzip git fceux virtualenv \
tmux

# install the code
git clone -b master --single-branch https://github.com/pathak22/noreward-rl.git
cd noreward-rl/
virtualenv curiosity
source $PWD/curiosity/bin/activate
pip install numpy
pip install -r src/requirements.txt
python curiosity/src/go-vncdriver/build.py

# download models
bash models/download_models.sh

# setup customized doom environment
cd doomFiles/
# then follow commands in doomFiles/README.md
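A quick sanity check (a hypothetical snippet, not part of the repo) can confirm the virtualenv is wired up, assuming requirements.txt pulled in tensorflow and gym:

# sanity_check.py -- hypothetical helper script, not part of the repo
import tensorflow as tf
import gym

print("tensorflow:", tf.__version__)
print("gym:", gym.__version__)

env = gym.make("CartPole-v0")   # any stock env is enough to confirm gym works
obs = env.reset()
print("env reset ok; observation shape:", getattr(obs, "shape", None))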
  2. Running demo

cd noreward-rl/src/
python demo.py --ckpt ../models/doom/doom_ICM
python demo.py --env-id SuperMarioBros-1-1-v0 --ckpt ../models/mario/mario_ICM
  3. Training code

cd noreward-rl/src/
# For Doom: doom or doomSparse or doomVerySparse
python train.py --default --env-id doom

# For Mario, change src/constants.py as follows:
# PREDICTION_BETA = 0.2
# ENTROPY_BETA = 0.0005
python train.py --default --env-id mario --noReward

xvfb-run -s "-screen 0 1400x900x24" bash  # only for remote desktops
# useful xvfb link: http://stackoverflow.com/a/30336424
python inference.py --default --env-id doom --record
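For intuition on the two constants mentioned in the training step, here is a schematic in plain Python (not the repo's actual TensorFlow graph) of how PREDICTION_BETA and ENTROPY_BETA enter the training objective, following the weighting used in the paper; the function and its arguments are illustrative only.

# Schematic of how the two constants in src/constants.py enter the objective;
# this mirrors the paper's combined loss, not the repo's actual TF graph.
def total_loss(policy_loss, entropy, inverse_loss, forward_loss,
               prediction_beta=0.2, entropy_beta=0.0005):
    # PREDICTION_BETA (beta) weighs the forward-model loss (the curiosity
    # signal) against the inverse-model loss (which shapes the features);
    # ENTROPY_BETA is the standard A3C entropy bonus on the policy.
    icm_loss = (1.0 - prediction_beta) * inverse_loss + prediction_beta * forward_loss
    a3c_loss = policy_loss - entropy_beta * entropy
    return a3c_loss + icm_loss

# dummy scalar losses, purely illustrative
print(total_loss(policy_loss=1.3, entropy=2.0, inverse_loss=0.7, forward_loss=0.4))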

2) Other helpful pointers

  • Paper: https://pathak22.github.io/noreward-rl/resources/icml17.pdf
  • Project website: https://pathak22.github.io/noreward-rl/

3) Acknowledgement

Vanilla A3C code is based on the open-source implementation of universe-starter-agent.

