AdverTorch is a Python toolbox for adversarial robustness research. The primary functionalities are implemented in PyTorch. Specifically, AdverTorch contains modules for generating adversarial perturbations and defending against adversarial examples, as well as scripts for adversarial training.
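To make the idea of "generating adversarial perturbations" concrete, here is a minimal, dependency-free sketch of the classic signed-gradient (FGSM-style) step on a toy logistic-regression model. All names and the toy model below are illustrative only; they are not part of the AdverTorch API, which wraps the same idea around PyTorch models.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss_grad_wrt_x(w, b, x, y):
    """Gradient of the logistic loss w.r.t. the input x (label y in {0, 1})."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    return [(p - y) * wi for wi in w]

def fgsm(w, b, x, y, eps):
    """One signed-gradient step on the input that increases the loss on (x, y)."""
    g = loss_grad_wrt_x(w, b, x, y)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, g)]

# Toy example: x is correctly classified (w.x + b > 0), but the
# eps-bounded perturbation flips the model's decision.
w, b = [2.0, -1.0], 0.0
x, y = [0.5, 0.25], 1
x_adv = fgsm(w, b, x, y, eps=0.5)
```

In AdverTorch the same pattern appears as attack objects that take a PyTorch model and expose a `perturb` method; this sketch only shows the underlying gradient-sign mechanics.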
Latest version (v0.1)
Installation
Installing AdverTorch itself
We developed AdverTorch under Python 3.6 and PyTorch 1.0.0 & 0.4.1. To install AdverTorch, simply run
pip install advertorch
or clone the repo and run
python setup.py install
To install the package in "editable" mode:
pip install -e .
Setting up the testing environments
Some attacks are tested against implementations in Foolbox or CleverHans to ensure correctness. Currently, they are tested under the following versions of related libraries.
AdverTorch is still under active development. We will add the following features/items down the road:
more examples
support for other machine learning frameworks, e.g. TensorFlow
more attacks, defenses and other related functionalities
support for other Python versions and future PyTorch versions
contributing guidelines
...
Known issues
FastFeatureAttack and JacobianSaliencyMapAttack do not pass the tests against the version of CleverHans used. (They used to pass on a previous version of CleverHans.) This issue is being investigated. In the file test_attacks_on_cleverhans.py, they are marked as "skipped" in the pytest tests.
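For reference, marking a known-failing test as skipped in pytest looks like the following. The test name and reason string here are illustrative stand-ins, not the actual contents of test_attacks_on_cleverhans.py.

```python
import pytest

@pytest.mark.skip(reason="does not pass against the current CleverHans version")
def test_fast_feature_attack_on_cleverhans():
    # Stand-in body: pytest never runs this while the skip marker is attached.
    raise AssertionError("known mismatch with current CleverHans")
```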
License
This project is licensed under the LGPL. The terms and conditions can be found in the LICENSE and LICENSE.GPL files.
Citation
If you use AdverTorch in your research, we kindly ask that you cite the following technical report:
@article{ding2019advertorch,
title={{AdverTorch} v0.1: An Adversarial Robustness Toolbox based on PyTorch},
author={Ding, Gavin Weiguang and Wang, Luyu and Jin, Xiaomeng},
journal={arXiv preprint arXiv:1902.07623},
year={2019}
}