Defense against Adversarial Attacks Using
High-Level Representation Guided Denoiser
Abstract
Neural networks are vulnerable to adversarial examples, a weakness that poses a threat to their application in security-sensitive systems. We propose the high-level representation guided denoiser (HGD) as a defense for image classification. A standard denoiser suffers from the error amplification effect, in
which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes
this problem by using a loss function defined as the difference between the target model’s outputs activated by the
clean image and the denoised image. Compared with ensemble adversarial training, the state-of-the-art defense method on large images, HGD has three advantages. First,
with HGD as a defense, the target model is more robust to
both white-box and black-box adversarial attacks. Second,
HGD can be trained on a small subset of the images and
generalizes well to other images and unseen classes. Third,
HGD can be transferred to defend models other than the
one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed other models by a large margin.
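
For concreteness, the sketch below illustrates the kind of high-level guided loss the abstract describes, in PyTorch-style Python. The noise-predicting form of the denoiser, the choice of L1 distance, and which layer of the target model serves as the "output" are assumptions made for illustration; the abstract only states that the loss is the difference between the target model's outputs on the clean and denoised images.

```python
import torch

def hgd_loss(denoiser, target_model, x_clean, x_adv):
    """Sketch of a high-level representation guided denoising loss.

    The denoiser is trained so that the target model's high-level output
    for the denoised adversarial image matches its output for the clean
    image. The target model itself is kept fixed (its parameters should
    have requires_grad=False); gradients flow through it only to update
    the denoiser.
    """
    # Assumed noise-predicting denoiser: subtract the estimated adversarial noise.
    x_denoised = x_adv - denoiser(x_adv)

    # Guidance signal from the clean image; no gradients needed here.
    with torch.no_grad():
        rep_clean = target_model(x_clean)

    # High-level representation of the denoised image (gradients reach the denoiser).
    rep_denoised = target_model(x_denoised)

    # Assumed L1 distance between the two high-level representations.
    return torch.mean(torch.abs(rep_denoised - rep_clean))
```

In such a setup, minimizing this loss with respect to the denoiser's parameters ties the denoising objective to the classifier's representation rather than to pixel-level reconstruction, which is how the error amplification effect described above is avoided.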