Securing Machine Learning against Adversarial Attacks

DEMONTIS, AMBRA
2018-03-26

Abstract

Machine learning techniques are nowadays widely used in application domains ranging from computer vision to computer security, even though they have been shown to be vulnerable to well-crafted attacks performed by skilled attackers. These include evasion attacks, which aim to mislead detection at test time, and poisoning attacks, in which malicious samples are injected into the training data to compromise the learning procedure. Several defenses have been proposed so far; however, most of them are computationally expensive, and it is not clear under which attack conditions they can be considered optimal. Moreover, a security evaluation methodology that allows comparing the security of different classifiers is still lacking. This thesis aims to contribute to the study of machine learning system security. We first provide an adversarial framework that helps us perform the security evaluation of different classifiers. We then exploit this framework to assess the security of several machine learning systems, focusing on systems with limited hardware resources; this analysis reveals an interesting relationship between sparsity and security. We further propose a poisoning attack that, unlike state-of-the-art ones, can be applied to a broad class of classifiers, including neural networks. Finally, we provide theoretically well-founded and efficient countermeasures, demonstrating their effectiveness on two case studies involving Android malware detection and robot vision.
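
To make the evasion attack model mentioned above concrete, the following is a minimal, illustrative sketch (not one of the specific algorithms developed in the thesis): a gradient-based evasion of a linear classifier with weights w and bias b, where the attacker perturbs a malicious sample within a fixed L2 budget until its decision score falls below the detection threshold. All names and values here are hypothetical.

import numpy as np

def evade_linear(x, w, b, budget=1.0, step=0.1, max_iter=100):
    # Move x along the steepest-descent direction of the linear decision
    # score f(x) = w @ x + b, stopping when the sample is classified as
    # benign (score < 0) or the L2 perturbation budget is exhausted.
    x_adv = x.astype(float).copy()
    direction = -w / (np.linalg.norm(w) + 1e-12)
    for _ in range(max_iter):
        if w @ x_adv + b < 0:
            break
        if np.linalg.norm(x_adv - x) + step > budget:
            break
        x_adv = x_adv + step * direction
    return x_adv

# Toy usage with a hypothetical 5-dimensional detector.
rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1
x = rng.normal(size=5) + w  # a sample that starts on the "malicious" side
x_adv = evade_linear(x, w, b, budget=5.0)
print("score before:", w @ x + b)
print("score after :", w @ x_adv + b)

Poisoning attacks differ in that the perturbation is applied to training samples rather than test samples, with the goal of degrading the classifier learned from the contaminated data.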
Files in this record: tesi.pdf (doctoral thesis) – open access – 8.64 MB – Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/255948
