CompactNet: learning a compact space for face presentation attack detection

Roli, Fabio;
2020-01-01

Abstract

Face presentation attacks have become a clear and present threat for face recognition systems, and many countermeasures have been proposed to mitigate them. Some of these countermeasures use features directly extracted from well-known color spaces (e.g., RGB, HSV and YCbCr) to distinguish fake face images from genuine ("live") ones. However, existing color spaces were originally designed for displaying the visual content of images or videos with high fidelity and are not well suited for directly discriminating between live and fake face images. Therefore, in this paper, we propose a deep-learning system, called CompactNet, for learning a compact space tailored for face presentation attack detection (PAD). More specifically, the proposed CompactNet does not directly extract features in existing color spaces, but feeds the color face image into a layer-by-layer progressive space generator. Then, under the optimization of the "points-to-center" triplet loss function, the generator learns a compact space with small intra-class distance, large inter-class distance and a safe interval between the classes. Finally, the feature of the image in the compact space is extracted by a pre-trained feature extractor and used for classification. Reported experiments on three publicly available face PAD databases, namely, Replay-Attack, OULU-NPU and HKBU-MARs V1, show that CompactNet separates the two classes of fake and genuine faces very well and significantly outperforms state-of-the-art methods for PAD. (C) 2020 Elsevier B.V. All rights reserved.
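As a rough illustration only (the paper's actual loss and network are not reproduced here), a "points-to-center" triplet objective of the kind the abstract describes — pulling each embedding toward its own class center while keeping it at least a margin farther from the other class's center — might be sketched as follows. The function name, the use of per-batch class means as centers, and the hinge formulation are all assumptions for this sketch.

```python
import numpy as np

def points_to_center_triplet_loss(embeddings, labels, margin=1.0):
    """Hypothetical sketch of a points-to-center triplet loss.

    Each sample is pulled toward its own class center (small intra-class
    distance) and pushed away from the nearest other-class center by at
    least `margin` (large inter-class distance with a safe interval).
    """
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    # Class centers approximated by the per-class mean of the batch.
    centers = {c: embeddings[labels == c].mean(axis=0)
               for c in np.unique(labels)}
    loss = 0.0
    for x, y in zip(embeddings, labels):
        d_own = np.linalg.norm(x - centers[y])            # intra-class distance
        d_other = min(np.linalg.norm(x - centers[c])      # nearest other-class center
                      for c in centers if c != y)
        loss += max(0.0, d_own - d_other + margin)        # hinge with safety margin
    return loss / len(embeddings)
```

With two well-separated clusters (e.g., live embeddings near the origin and fake ones far away), every hinge term is zero and the loss vanishes; overlapping clusters yield a positive loss that a generator would be trained to minimize.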
Face PAD
Biometrics
Compact space
Deep learning
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/390447

Warning! The data shown have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 18
  • Web of Science: 14