CompactNet: learning a compact space for face presentation attack detection
Roli, Fabio;
2020-01-01
Abstract
Face presentation attacks have become a clear and present threat to face recognition systems, and many presentation attack detection (PAD) countermeasures have been proposed to mitigate them. Some of these countermeasures use features extracted directly from well-known color spaces (e.g., RGB, HSV, and YCbCr) to distinguish fake face images from genuine ("live") ones. However, existing color spaces were originally designed for displaying the visual content of images or videos with high fidelity and are not well suited for directly discriminating between live and fake face images. Therefore, in this paper, we propose a deep-learning system, called CompactNet, that learns a compact space tailored for face PAD. More specifically, CompactNet does not extract features directly in existing color spaces; instead, it feeds the color face image into a layer-by-layer progressive space generator. Then, optimized by a "points-to-center" triplet loss function, the generator learns a compact space with small intra-class distance, large inter-class distance, and a safe interval between the two classes. Finally, the feature of the image in the compact space is extracted by a pre-trained feature extractor and used for classification. Experiments on three publicly available face PAD databases, namely Replay-Attack, OULU-NPU, and HKBU-MARs V1, show that CompactNet separates the fake and genuine face classes very well and significantly outperforms state-of-the-art PAD methods.
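The abstract describes the "points-to-center" triplet loss only at a high level. The sketch below is one plausible PyTorch reading of it, not the authors' released code: the function name, the per-batch class centers, and the margin value are assumptions. It illustrates the stated objective, pulling each embedding toward its own class center while keeping it at least a margin farther from the opposite class center, which yields small intra-class distance, large inter-class distance, and a safety interval between live and fake faces.

```python
# Hypothetical sketch of a "points-to-center" triplet-style loss; details
# (centers per batch, margin, naming) are assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def points_to_center_triplet_loss(embeddings: torch.Tensor,
                                  labels: torch.Tensor,
                                  margin: float = 1.0) -> torch.Tensor:
    """embeddings: (N, D) features produced by the space generator/extractor.
    labels: (N,) with 0 = genuine ("live"), 1 = attack ("fake")."""
    classes = labels.unique()
    if len(classes) < 2:                      # need both classes in the batch
        return embeddings.new_zeros(())
    # Per-batch class centers (mean embedding of each class).
    centers = {int(c): embeddings[labels == c].mean(dim=0) for c in classes}
    losses = []
    for c in classes:
        own = centers[int(c)]
        # Mean center of the other class(es) acts as the "negative" anchor.
        other = torch.stack([centers[int(o)] for o in classes if o != c]).mean(dim=0)
        pts = embeddings[labels == c]
        d_pos = (pts - own).pow(2).sum(dim=1)    # distance to own center
        d_neg = (pts - other).pow(2).sum(dim=1)  # distance to the other center
        # Hinge: own-center distance must beat the other-center distance by `margin`.
        losses.append(F.relu(d_pos - d_neg + margin).mean())
    return torch.stack(losses).mean()


# Toy usage: 8 random 128-D embeddings, half live, half fake.
if __name__ == "__main__":
    emb = torch.randn(8, 128, requires_grad=True)
    lab = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
    loss = points_to_center_triplet_loss(emb, lab, margin=1.0)
    loss.backward()
    print(float(loss))
```

In this reading, the loss would be backpropagated through the layer-by-layer space generator so that the learned compact space, rather than a fixed color space, carries the class separation.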
| File | Type | Size | Format | Access |
|---|---|---|---|---|
| 1-s2.0-S0925231220308237-main.pdf | Published version (VoR) | 5.05 MB | Adobe PDF | Archive managers only |