
resnext-wsl


Robustness properties of Facebook's ResNeXt WSL models

The code here can be used to reproduce the results reported in the following paper:

Orhan AE (2019) Robustness properties of Facebook's ResNeXt WSL models. arXiv:1907.07640.

All simulation results reported in the paper are provided in the results folder.

Requirements

  • torch == 1.1.0

  • torchvision == 0.3.0

  • foolbox == 1.8.0

  • ImageNet validation data in its standard directory structure.

  • ImageNet-C and ImageNet-P data in their standard directory structure.

  • ImageNet-A data in its standard directory structure.
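The pinned versions above matter for reproducibility (the foolbox 1.x API in particular differs from the later 2.x/3.x releases). A quick environment check, assuming the packages expose __version__ in the usual way:

import torch, torchvision, foolbox

# The evaluation scripts were written against these pinned releases.
assert torch.__version__.startswith('1.1.0'), torch.__version__
assert torchvision.__version__.startswith('0.3.0'), torchvision.__version__
assert foolbox.__version__.startswith('1.8.0'), foolbox.__version__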

Replication

In total, there are eight experiments reported in the paper. They can be reproduced as follows:

  1. To evaluate the ImageNet validation accuracy of the models, run evaluate_validation.py, e.g.:

python3 evaluate_validation.py /IMAGENET/DIR/ --model-name 'resnext101_32x16d_wsl'

Here and below, model-name should be one of 'resnext101_32x8d', 'resnext101_32x8d_wsl', 'resnext101_32x16d_wsl', 'resnext101_32x32d_wsl', or 'resnext101_32x48d_wsl'. /IMAGENET/DIR/ is the top-level ImageNet directory (it should contain a val directory containing the validation images).
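The WSL variants are distributed through torch.hub (facebookresearch/WSL-Images), while 'resnext101_32x8d' is the ImageNet-trained ResNeXt that ships with torchvision. A minimal loading sketch, not necessarily how the evaluation scripts construct their models:

import torch
import torchvision.models as models

# Weakly-supervised (WSL) ResNeXt variant, published by Facebook via torch.hub.
wsl_model = torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x16d_wsl')

# ImageNet-only baseline of the same architecture family, from torchvision.
baseline = models.resnext101_32x8d(pretrained=True)

wsl_model.eval()
baseline.eval()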

  2. To evaluate the models on ImageNet-A, run evaluate_imageneta.py, e.g.:

python3 evaluate_imageneta.py /IMAGENETA/DIR/ --model-name 'resnext101_32x16d_wsl'

where /IMAGENETA/DIR/ is the top-level ImageNet-A directory.
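ImageNet-A covers a 200-class subset of ImageNet, so accuracy is typically computed after restricting the 1000-way predictions to those classes. A sketch of that projection step, where subset_indices is a placeholder for the 200 ImageNet class indices (evaluate_imageneta.py may handle the mapping differently):

import torch

def subset_top1(logits, targets, subset_indices):
    # targets are 1000-class ImageNet indices of the true labels.
    # Ignore classes outside the ImageNet-A label set before taking the argmax.
    masked = torch.full_like(logits, float('-inf'))
    masked[:, subset_indices] = logits[:, subset_indices]
    return (masked.argmax(dim=1) == targets).float().mean().item()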

  3. To evaluate the models on ImageNet-C, run evaluate_imagenetc.py, e.g.:

python3 evaluate_imagenetc.py /IMAGENETC/DIR/ --model-name 'resnext101_32x16d_wsl'

where /IMAGENETC/DIR/ is the top-level ImageNet-C directory.
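ImageNet-C stores each corruption at five severity levels, laid out as <corruption>/<severity>/<class>/<image>. The usual per-corruption summary averages the top-1 error over severities; a minimal sketch, assuming a helper evaluate_dir() (hypothetical name) that returns top-1 accuracy for a directory:

def average_corruption_error(evaluate_dir, imagenetc_dir, corruption):
    # Mean top-1 error over the five severity levels of one corruption type.
    # (The standard CE/mCE metric additionally normalizes by AlexNet's errors.)
    errors = [1.0 - evaluate_dir(f'{imagenetc_dir}/{corruption}/{severity}')
              for severity in range(1, 6)]
    return sum(errors) / len(errors)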

  4. To evaluate the models on ImageNet-P, run evaluate_imagenetp.py, e.g.:

python3 evaluate_imagenetp.py /IMAGENETP/DIR/ --model-name 'resnext101_32x16d_wsl' --distortion-name 'gaussian_noise'

where /IMAGENETP/DIR/ is the top-level ImageNet-P directory, and distortion-name should be one of 'gaussian_noise', 'shot_noise', 'motion_blur', 'zoom_blur', 'brightness', 'translate', 'rotate', 'tilt', 'scale', or 'snow'.
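Since the script processes one distortion at a time, a small driver can sweep all ten. A convenience sketch that simply shells out to the command shown above (same placeholder path and flags):

import subprocess

distortions = ['gaussian_noise', 'shot_noise', 'motion_blur', 'zoom_blur', 'brightness',
               'translate', 'rotate', 'tilt', 'scale', 'snow']

for name in distortions:
    # Same invocation as the example above, once per distortion.
    subprocess.run(['python3', 'evaluate_imagenetp.py', '/IMAGENETP/DIR/',
                    '--model-name', 'resnext101_32x16d_wsl',
                    '--distortion-name', name], check=True)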

  5. To run black-box adversarial attacks on the models, run evaluate_blackbox.py, e.g.:

python3 evaluate_blackbox.py /IMAGENET/DIR/ --model-name 'resnext101_32x16d_wsl' --epsilon 0.06

where epsilon is the normalized perturbation size.
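For orientation, a foolbox 1.8-style setup for a decision-based attack is sketched below; the actual attack, criterion, and settings used in evaluate_blackbox.py may differ, and the preprocessing constants shown are simply the usual ImageNet statistics:

import numpy as np
import torch
import foolbox

model = torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x16d_wsl').eval()

# foolbox 1.x wraps the PyTorch model; inputs are float32 numpy arrays in [0, 1], CHW.
mean = np.array([0.485, 0.456, 0.406]).reshape((3, 1, 1))
std = np.array([0.229, 0.224, 0.225]).reshape((3, 1, 1))
fmodel = foolbox.models.PyTorchModel(model, bounds=(0, 1), num_classes=1000,
                                     preprocessing=(mean, std))

# Decision-based boundary attack (label access only, no gradients).
attack = foolbox.attacks.BoundaryAttack(fmodel)
# image: (3, 224, 224) float32 array in [0, 1]; label: int class index
# adversarial = attack(image, label, iterations=1000)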

  6. To run white-box adversarial attacks on the models, run evaluate_whitebox.py, e.g.:

python3 evaluate_whitebox.py /IMAGENET/DIR/ --model-name 'resnext101_32x16d_wsl' --epsilon 0.06 --pgd-steps 10

where epsilon is the normalized perturbation size and pgd-steps is the number of PGD steps.
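For reference, a plain-PyTorch sketch of an L-infinity PGD attack with the two parameters exposed above (epsilon on the [0, 1] input scale and a fixed number of steps); evaluate_whitebox.py may differ in loss, step size, random starts, or normalization handling:

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, epsilon=0.06, pgd_steps=10):
    # x: images in [0, 1]; y: integer labels. Step size is a common heuristic choice.
    step_size = 2.5 * epsilon / pgd_steps
    x_adv = x.clone().detach()
    for _ in range(pgd_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        # Project back into the epsilon-ball around x and the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon).clamp(0.0, 1.0)
    return x_adv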

  7. To evaluate the shape biases of the models, run evaluate_shapebias.py, e.g.:

python3 evaluate_shapebias.py /CUECONFLICT/DIR/ --model-name 'resnext101_32x16d_wsl'

where /CUECONFLICT/DIR/ is the directory containing the shape-texture cue-conflict images. We provide these images in the cueconflict_images folder. They are copied from Robert Geirhos's texture-vs-shape repository, but with the non-conflicting images (images whose shape and texture categories match) removed.
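The shape-bias measure of Geirhos et al. counts, among cue-conflict trials where the prediction matches either the shape category or the texture category, the fraction that matches the shape category. A sketch with hypothetical inputs (parallel lists of category labels); evaluate_shapebias.py's actual bookkeeping may differ:

def shape_bias(predictions, shape_labels, texture_labels):
    shape_hits = sum(p == s for p, s in zip(predictions, shape_labels))
    texture_hits = sum(p == t for p, t in zip(predictions, texture_labels))
    # Trials matching neither cue are excluded from the denominator.
    return shape_hits / (shape_hits + texture_hits)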

  8. To visualize the learned features of the models, run visualize_features.py, e.g.:

python3 visualize_features.py /IMAGENET/DIR/ --model-name 'resnext101_32x16d_wsl'

Acknowledgments

The code here utilizes code and stimuli from the texture-vs-shape repository by Robert Geirhos, the robustness and natural adversarial examples repositories by Dan Hendrycks, and the ImageNet example from PyTorch. We are also grateful to the authors of Mahajan et al. (2018) for making their pre-trained ResNeXt WSL models publicly available.

