Abstract
Face anti-spoofing is essential to protect face recognition systems from security breaches. Much of the recent progress has been driven by the availability of face anti-spoofing benchmark datasets. However, existing face anti-spoofing benchmarks have a limited number of subjects (≤170) and modalities (≤2), which hinders further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and visual modalities. Specifically, it consists of 1,000 subjects with 21,000 videos, and each sample has 3 modalities (i.e., RGB, Depth and IR). We also provide a measurement set, an evaluation protocol, and training/validation/testing subsets, developing a new benchmark for face anti-spoofing. Moreover, we present a new multi-modal fusion method as a baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at https://sites.google.com/qq.com/chalearnfacespoofingattackdete/
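The channel re-weighting idea mentioned in the abstract can be illustrated with a minimal sketch. This is an assumption about the general mechanism (squeeze-and-excitation-style gating), not the paper's exact architecture: each channel of a modality's feature map is squeezed to a scalar by global average pooling, a small bottleneck produces per-channel weights in (0, 1), and the feature map is rescaled so informative channels are emphasized and less useful ones suppressed. The function name and weight shapes below are hypothetical.

```python
import numpy as np

def channel_reweight(features, w1, w2):
    """Sketch of SE-style channel re-weighting for one modality.

    features: (C, H, W) feature map; w1: (C//r, C), w2: (C, C//r)
    bottleneck weights with reduction ratio r (hypothetical shapes).
    """
    squeeze = features.mean(axis=(1, 2))            # (C,) global average pool per channel
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gates in (0, 1)
    return features * gates[:, None, None]          # rescale each channel

# Usage sketch: per-modality maps (e.g., RGB, Depth, IR) would each be
# re-weighted this way before being fused (e.g., concatenated) downstream.
```

Because the gates lie strictly in (0, 1), no channel is amplified; the operation only attenuates, which is one simple way to realize "suppressing the less useful ones".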