deeprl_network
This repo implements state-of-the-art MARL algorithms for networked system control, where the observability and communication of each agent are limited to its neighborhood. For fair comparison, all algorithms are applied to A2C agents and classified into two groups: IA2C contains non-communicative policies that use neighborhood information only, whereas MA2C contains communicative policies with certain communication protocols.
Available IA2C algorithms:
PolicyInferring: Lowe, Ryan, et al. "Multi-agent actor-critic for mixed cooperative-competitive environments." Advances in Neural Information Processing Systems, 2017.
ConsensusUpdate: Zhang, Kaiqing, et al. "Fully decentralized multi-agent reinforcement learning with networked agents." arXiv preprint arXiv:1802.08757, 2018.
Available MA2C algorithms:
NeurComm: Inspired by Gilmer, Justin, et al. "Neural message passing for quantum chemistry." arXiv preprint arXiv:1704.01212, 2017. (See the sketch below.)
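To make the idea concrete, here is a minimal, illustrative sketch of one round of neighborhood message passing in the spirit of NeurComm. It is not the repo's actual implementation; all names (n_agents, hidden_dim, W_msg, the ring topology) are assumptions for illustration only.

import numpy as np

n_agents, hidden_dim = 5, 8
# Assumed ring topology: each agent communicates with its two neighbors.
adjacency = np.zeros((n_agents, n_agents), dtype=bool)
for i in range(n_agents):
    adjacency[i, (i - 1) % n_agents] = True
    adjacency[i, (i + 1) % n_agents] = True

hidden = np.random.randn(n_agents, hidden_dim)   # per-agent hidden states
W_msg = np.random.randn(hidden_dim, hidden_dim)  # shared message weights (illustrative)

def communicate(hidden, adjacency):
    # Each agent encodes an outgoing message, aggregates its neighbors'
    # messages, and mixes the mean-aggregated message into its own state.
    messages = np.tanh(hidden @ W_msg)
    agg = adjacency @ messages
    deg = adjacency.sum(axis=1, keepdims=True)
    return np.tanh(hidden + agg / deg)

hidden = communicate(hidden, adjacency)  # one communication round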
Available NMARL scenarios:
ATSC Grid: Adaptive traffic signal control in a synthetic traffic grid.
ATSC Monaco: Adaptive traffic signal control in a real-world traffic network from Monaco city.
CACC Catch-up: Cooperative adaptive cruise control for catching up with the leading vehicle.
CACC Slow-down: Cooperative adaptive cruise control for following the leading vehicle as it slows down.
Requirements: Python3
First define all hyperparameters (including the algorithm and DNN structure) in a config file under [config_dir] (see the example configs), and create the base directory of each experiment [base_dir]. For ATSC Grid, please call build_file.py to generate the SUMO network files before training.
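As a hedged sketch of how such a config might be loaded with Python's standard configparser: the file path and the section/option names below (MODEL_CONFIG, lr_init) are assumptions for illustration; consult the example configs under [config_dir] for the actual schema.

import configparser

config = configparser.ConfigParser()
config.read('config/config_ia2c_grid.ini')  # hypothetical path

# Read a hyperparameter if the config defines it (names are assumed).
if config.has_option('MODEL_CONFIG', 'lr_init'):
    lr = config.getfloat('MODEL_CONFIG', 'lr_init')
    print('initial learning rate:', lr)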
To train a new agent, run
python3 main.py --base-dir [base_dir] train --config-dir [config_dir]
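For example, a concrete invocation might look like the line below; the base directory and config path are hypothetical and should be replaced with your own.

python3 main.py --base-dir ~/ia2c_grid train --config-dir config/config_ia2c_grid.ini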
Training config/data and the trained model will be output to [base_dir]/data and [base_dir]/model, respectively.
To access tensorboard during training, run
tensorboard --logdir=[base_dir]/log
To evaluate a trained agent, run
python3 main.py --base-dir [base_dir] evaluate --evaluation-seeds [seeds]
Evaluation data will be output to [base_dir]/eva_data. Make sure the evaluation seeds are different from those used in training.
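For example, assuming seeds are passed as a comma-separated list (the exact format should be checked against the argument parsing in main.py):

python3 main.py --base-dir ~/ia2c_grid evaluate --evaluation-seeds 2000,2500,3000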
To visualize the agent behavior in ATSC scenarios, run
python3 main.py --base-dir [base_dir] evaluate --evaluation-seeds [seed] --demo
It is recommended to use a single evaluation seed for the demo run. This will launch the SUMO GUI, and view.xml can be applied to visualize queue length and intersection delay via edge color and thickness.
For more implementation details and the underlying reasoning, please check our paper Multi-agent Reinforcement Learning for Networked System Control.
@inproceedings{chu2020multiagent,
  title={Multi-agent Reinforcement Learning for Networked System Control},
  author={Tianshu Chu and Sandeep Chinchali and Sachin Katti},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=Syx7A3NFvH}
}