
Secure machine learning against evasion and poisoning attacks

RUSSU, PAOLO
2017-04-11

Abstract

In the last decades, machine learning has been widely used in security applications such as spam filtering, intrusion detection in computer networks, and biometric identity recognition. The adoption of such techniques has been driven mainly by their high generalization capability, which also allows new kinds of attacks to be identified. However, in these applications machine learning has to deal with intelligent and adaptive adversaries that aim to subvert its proper functioning to achieve their malicious goals. Since machine learning has not been designed to take the presence of attackers into account, it may exhibit novel, specific vulnerabilities that can be exploited in the wild. Accordingly, identifying potential vulnerabilities and proposing new design schemes for pattern recognition and machine learning techniques in adversarial environments are not only two open problems, but also two of the major goals of adversarial classification. This situation has led to an arms race between attackers and developers.

To limit the effects of the attackers, designers should follow a proactive approach, i.e., they should figure out how the adversary can interact with the system, in terms of points and methods of attack, and develop appropriate countermeasures. This should force attackers to spend greater effort (in terms of time, skills, and resources) to find and exploit less intuitive vulnerabilities. In the literature there are several attempts to create secure systems, based on models that characterize the adversary's behaviour with respect to a particular classifier-application scenario. However, the frameworks used to represent the attacker are customized for a specific setting, so it is difficult to readily apply them to different applications. Moreover, the adoption of the proposed robust solutions in practice is hampered by several factors, such as the difficulty of meeting specific theoretical requirements, the complexity of implementation, and scalability issues in terms of the computational time and space required during training.

The main goal of this work is to develop methods for designing pattern recognition algorithms that are secure from the ground up and can effectively cope with malicious agents. To start with, we show the usefulness of a recently proposed general threat model, which allows one to analyse the security of several application domains, by applying it to a specific scenario. In particular, we show how to use this framework to analyse the security of biometric recognition systems from a novel perspective, by enabling the categorization of known and novel vulnerabilities, along with the corresponding attacks, countermeasures and defence mechanisms. Using this threat model as a starting point to analyse systems in adversarial settings, we then develop novel solutions to improve the security of several types of classifiers. With respect to state-of-the-art methods, our non-trivial goal is to propose secure learning algorithms that are not computationally more demanding than their non-secure counterparts. We show that an adequate choice of the classifier's parameters, related to the specific hypothesized attack scenario, enables us to significantly improve system security.
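To make the last claim concrete, the following is a minimal illustrative sketch, not code from the thesis: assuming scikit-learn's LinearSVC on synthetic data and an L-infinity-bounded evasion attacker with a hypothetical budget eps, it measures how the choice of the regularization parameter C affects detection of optimally perturbed "malicious" samples. All dataset sizes, parameter values, and names here are illustrative assumptions.

# Hedged sketch (not the thesis implementation): evasion attack on a linear SVM,
# assuming an L-infinity-bounded attacker. For a linear decision function
# f(x) = w.x + b, the worst-case perturbation within budget eps for a positive
# (malicious) sample is delta = -eps * sign(w), which lowers the score by
# exactly eps * ||w||_1.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def evasion_detection_rate(clf, X_pos, eps):
    """Fraction of malicious samples still detected after each applies its
    optimal L-infinity-bounded perturbation against a linear classifier."""
    w = clf.coef_.ravel()
    X_adv = X_pos - eps * np.sign(w)          # worst-case bounded perturbation
    return (clf.predict(X_adv) == 1).mean()

X, y = make_classification(n_samples=2000, n_features=50,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
X_pos = X_te[y_te == 1]                       # "malicious" test samples

for C in (0.01, 1.0, 100.0):                  # regularization is the security knob
    clf = LinearSVC(C=C, max_iter=10000).fit(X_tr, y_tr)
    clean = clf.score(X_te, y_te)
    adv = evasion_detection_rate(clf, X_pos, eps=0.5)
    print(f"C={C:<6} clean acc={clean:.3f}  detection under evasion={adv:.3f}  "
          f"||w||_1={np.abs(clf.coef_).sum():.2f}")

Printing ||w||_1 next to the detection rate makes the trade-off visible: under this attack the score of a linear classifier can drop by at most eps * ||w||_1, so the regularization setting directly bounds how far an attacker within budget can shift the decision, without adding any training cost over the standard SVM.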
Files for this item:
tesi_di_dottorato_paolo_russu.pdf — doctoral thesis (open access, Adobe PDF, 5.22 MB)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/249561
