Abstract
Face recognition has achieved remarkable progress in recent years, driven by advances in deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can have severe consequences in security-sensitive real-world face recognition applications. Adversarial attacks are widely studied because they can expose the vulnerabilities of models before deployment. In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attacker has no access to the model parameters or gradients and can only obtain hard-label predictions by sending queries to the target model. This setting is more practical for real-world face recognition systems. To improve the efficiency of previous methods, we propose an evolutionary attack algorithm that models the local geometry of the search directions and reduces the dimension of the search space. Extensive experiments demonstrate the effectiveness of the proposed method, which induces a minimal perturbation on an input face image with fewer queries. We also successfully apply the proposed method to attack a real-world face recognition system.
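The decision-based setting described above can be illustrated with a minimal (1+1)-evolution-strategy sketch: the attack only queries a hard-label oracle, mutates the perturbation in a low-dimensional space (the dimension-reduction idea), and accepts mutations that stay adversarial while shrinking the perturbation norm. Everything here is a hypothetical stand-in, not the paper's actual algorithm: `decision_oracle` uses a toy linear decision boundary in place of a face recognition model, `upsample` is a simple nearest-neighbour lift, and the full method additionally adapts a covariance model of the search distribution, which is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
SHAPE = (32, 32)
ORIGINAL = np.zeros(SHAPE)  # toy "face image"

def decision_oracle(image):
    # Hypothetical hard-label oracle standing in for a face recognition
    # system: it reveals only whether the query crosses the decision
    # boundary. Toy boundary: adversarial once the pixel sum exceeds 20.
    return image.sum() > 20.0

def upsample(z, shape):
    # Lift a low-dimensional mutation back to image resolution
    # (nearest-neighbour replication of each low-dim entry).
    reps = (shape[0] // z.shape[0], shape[1] // z.shape[1])
    return np.kron(z, np.ones(reps))

def evolutionary_attack(x, steps=2000, low_dim=(8, 8), sigma=0.01, shrink=0.001):
    # Start from a large perturbation that is already adversarial,
    # then minimize its L2 norm subject to staying adversarial.
    delta = np.full(x.shape, 0.5)
    assert decision_oracle(x + delta)
    for _ in range(steps):
        # Mutate in the low-dimensional search space, then contract
        # the candidate slightly toward the clean image.
        mutation = sigma * upsample(rng.standard_normal(low_dim), x.shape)
        candidate = (1.0 - shrink) * delta + mutation
        # Accept only candidates that remain adversarial (one oracle
        # query each) and strictly reduce the perturbation norm.
        if decision_oracle(x + candidate) and (
            np.linalg.norm(candidate) < np.linalg.norm(delta)
        ):
            delta = candidate
    return delta

adv = evolutionary_attack(ORIGINAL)
print("final perturbation norm:", round(float(np.linalg.norm(adv)), 3))
```

The sketch captures the two ideas highlighted in the abstract, searching in a reduced-dimension space and using only hard-label query feedback, while the acceptance rule makes the perturbation norm monotonically non-increasing.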