Security of multimodal biometric systems against spoof attacks
MOMIN, ZAHID AKHTAR SHABBEER AHMAD
2012-03-06
Abstract
A biometric system is essentially a pattern recognition system used in an adversarial environment. Like any conventional security system, a biometric system is exposed to malicious adversaries, who can manipulate data to make the system ineffective by compromising its integrity. Current theory and design methods for biometric systems do not take the vulnerability to such adversary attacks into account. The evaluation of classical design methods is therefore an open problem: it remains to be investigated whether they lead to secure systems. To make biometric systems secure, it is necessary to understand and evaluate the threats, and thus to develop effective countermeasures and robust system designs, both technical and procedural, where necessary. Accordingly, extending the theory and design methods of biometric systems is mandatory to safeguard their security and reliability in adversarial environments. In this thesis, we provide some contributions in this direction.

Among all the potential attacks discussed in the literature, spoof attacks are one of the main threats against the security of biometric systems for identity recognition. Multimodal biometric systems are commonly believed to be intrinsically more robust to spoof attacks than systems based on a single biometric trait, as they combine information coming from different biometric traits. However, recent works have questioned this belief and shown that multimodal systems can be misled by an attacker (impostor) even by spoofing only one of the biometric traits. Therefore, we first provide a detailed review of state-of-the-art work on multimodal biometric systems under spoof attacks. The scope of state-of-the-art results is very limited, since they were obtained under a very restrictive "worst-case" hypothesis, in which the attacker is assumed to be able to fabricate a perfect replica of a biometric trait whose matching score distribution is identical to that of genuine traits.
We therefore question and investigate the validity of the "worst-case" hypothesis using a large set of real spoof attacks, and provide empirical evidence that the "worst-case" scenario may not be representative of real spoof attacks: its suitability may depend on the specific biometric trait, the matching algorithm, and the technique used to counterfeit the spoofed trait. We then propose a security evaluation methodology for biometric systems under spoof attacks that can be used in real applications, since it does not require fabricating fake biometric traits, it allows the designer to take into account the different possible qualities of the fake traits used by different attackers, and it exploits only information on genuine and impostor samples, which is collected anyway for the training of a biometric system. Our methodology evaluates performance under a simulated spoof attack, using a model of the fake score distribution that explicitly takes into account different degrees of quality of the fake biometric traits. In particular, we propose two models of the match score distribution of fake traits that account for the different factors which can affect it: the particular spoofed biometric, the sensor, the matching algorithm, the technique used to construct the fake biometrics, and the skill of the attacker. All these factors are summarized in a single parameter that we call the "attack strength". Further, we propose an extension of our security evaluation method to rank several biometric score fusion rules according to their relative robustness against spoof attacks, allowing the designer to choose the most robust rule according to the method's prediction.
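As an illustration of the kind of simulation such a methodology enables, the following sketch models the fake score distribution as a simple mixture of the genuine and impostor score distributions, with a single parameter playing the role of the "attack strength". This is an illustrative stand-in, not the thesis's actual models: the mixture form, the function names, and the synthetic Gaussian scores are all assumptions made here for the sake of a concrete example.

```python
import random

def simulate_fake_scores(genuine, impostor, attack_strength, n, rng):
    """Simulated fake match scores under a simple mixture model:
    with probability `attack_strength` a fake score is drawn from the
    genuine score sample, otherwise from the impostor score sample.
    attack_strength = 0 reproduces a zero-effort impostor;
    attack_strength = 1 reproduces the "worst-case" hypothesis, in which
    fake scores are distributed exactly like genuine ones."""
    return [rng.choice(genuine) if rng.random() < attack_strength
            else rng.choice(impostor)
            for _ in range(n)]

def false_accept_rate(scores, threshold):
    """Fraction of attack scores at or above the acceptance threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Illustrative synthetic score samples (not real biometric data).
rng = random.Random(0)
genuine = [rng.gauss(0.8, 0.1) for _ in range(1000)]
impostor = [rng.gauss(0.2, 0.1) for _ in range(1000)]

threshold = 0.5
for strength in (0.0, 0.5, 1.0):
    fakes = simulate_fake_scores(genuine, impostor, strength, 5000, rng)
    print(f"attack strength {strength:.1f}: "
          f"FAR = {false_accept_rate(fakes, threshold):.3f}")
```

Sweeping the attack-strength parameter in this way lets a designer read off how the false acceptance rate degrades between the zero-effort and worst-case extremes, without ever fabricating a physical fake trait.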
We then present an empirical analysis, using face and fingerprint data sets that include real spoofed traits, to show that the proposed models provide a good approximation of the fake traits' score distribution, and that our method thus provides an adequate estimate of the security of biometric systems against spoof attacks. We also use our method to show how the security of different multimodal systems can be evaluated on publicly available benchmark data sets that contain no spoof attacks. Our experimental results show that the robustness of multimodal biometric systems to spoof attacks strongly depends on the particular matching algorithm, the score fusion rule, and the attack strength of the fake traits. Finally, considering a multimodal system based on face and fingerprint biometrics, we present evidence that the proposed methodology for ranking score fusion rules provides the correct ranking of score fusion rules under spoof attacks.
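The ranking idea can be illustrated with a small sketch: evaluate each candidate fusion rule on bimodal score pairs in which one trait is spoofed (scores drawn from a simulated fake distribution) while the other yields ordinary impostor scores, then order the rules by the resulting false acceptance rate. The four fixed fusion rules and the synthetic score model below are assumptions made here for illustration, not the thesis's actual experimental setup.

```python
import random

# Common fixed score fusion rules, applied to a pair of per-modality scores.
FUSION_RULES = {
    "sum": lambda a, b: (a + b) / 2,
    "product": lambda a, b: a * b,
    "min": min,
    "max": max,
}

def rank_rules_under_spoof(spoofed_scores, other_scores, threshold):
    """Rank fusion rules from most to least robust, i.e. by ascending
    false acceptance rate when one of the two modalities is spoofed."""
    fars = {}
    for name, rule in FUSION_RULES.items():
        fused = [rule(s, o) for s, o in zip(spoofed_scores, other_scores)]
        fars[name] = sum(f >= threshold for f in fused) / len(fused)
    return sorted(fars.items(), key=lambda kv: kv[1])

# Illustrative synthetic scores: the spoofed trait matches well,
# while the non-spoofed trait yields zero-effort impostor scores.
rng = random.Random(0)
spoofed = [rng.gauss(0.8, 0.1) for _ in range(2000)]
other = [rng.gauss(0.2, 0.1) for _ in range(2000)]

for name, far in rank_rules_under_spoof(spoofed, other, threshold=0.5):
    print(f"{name:7s} FAR = {far:.3f}")
```

Under this toy model, rules that let a single high score dominate the fused decision are penalized, which matches the intuition that a multimodal system can be misled by spoofing only one trait.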
File: PhD_Momin_Zahid.pdf (PhD thesis, open access) | 6.53 MB | Adobe PDF