Evasion attacks against machine learning at test time

Battista Biggio; Igino Corona; Davide Maiorca; Giorgio Giacinto; Fabio Roli
2013-01-01

Abstract

In security-sensitive applications, the success of machine learning systems depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several widely used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of classifier performance under evasion attacks, and allows for a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
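The abstract describes the attack only at a high level: gradient descent on the classifier's discriminant function g(x), subject to a bound on how much the attack sample may be modified. The sketch below is an illustrative reconstruction of that core idea, not the paper's implementation. The surrogate classifier, the synthetic data, the function names (grad_g, evade), and the parameters (step size, budget d_max, iteration count) are all assumptions chosen for the example; the paper's full method additionally includes a kernel-density "mimicry" term that steers the sample toward regions populated by benign data, which is omitted here.

```python
# Illustrative sketch of gradient-based evasion (not the paper's code).
# Starting from a malicious sample x0, run projected gradient descent
# on the discriminant function g(x) of an RBF SVM until the sample is
# classified as benign, keeping the total modification within d_max.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Surrogate classifier on synthetic data, a stand-in for a malware
# detector (class 1 = malicious, class 0 = benign).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = SVC(kernel="rbf", gamma=0.1).fit(X, y)

def grad_g(x, clf):
    """Gradient of g(x) = sum_i a_i * K(x, x_i) + b for the RBF kernel,
    where a_i = alpha_i * y_i and K(x, x_i) = exp(-gamma * ||x - x_i||^2)."""
    sv = clf.support_vectors_
    a = clf.dual_coef_.ravel()
    diff = x - sv                                     # shape (n_sv, d)
    k = np.exp(-clf.gamma * (diff ** 2).sum(axis=1))  # K(x, x_i)
    return (a * k) @ (-2.0 * clf.gamma * diff)

def evade(x0, clf, step=1.0, d_max=2.0, n_iter=500):
    """Descend g(x) from x0, projecting back onto the L2 ball of radius
    d_max. step, d_max and n_iter are illustrative values, not the paper's."""
    x = x0.astype(float)
    for _ in range(n_iter):
        if clf.decision_function(x[None, :])[0] < 0:  # now looks benign
            break
        x -= step * grad_g(x, clf)
        delta = x - x0
        norm = np.linalg.norm(delta)
        if norm > d_max:                              # enforce the budget
            x = x0 + delta * (d_max / norm)
    return x

x0 = X[y == 1][0]                                     # a malicious sample
x_adv = evade(x0, clf)
print("g(x0)   =", clf.decision_function(x0[None, :])[0])
print("g(x_adv)=", clf.decision_function(x_adv[None, :])[0])
```

In the perfect-knowledge scenario this surrogate is the deployed model itself; in the limited-knowledge scenarios simulated in the paper, the attacker would instead train a surrogate on her own data and transfer the attack.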
Year: 2013
ISBN: 978-3-642-40993-6
Files in this record (restricted: repository administrators only; a copy can be requested):
  • Evasion attacks against machine learning at test time.pdf (475.26 kB, Adobe PDF)
  • biggio13-ecml.pdf (401.93 kB, Adobe PDF)
  • Biggio13-ecml.pdf (473.78 kB, Adobe PDF)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/105260
Citations
  • Scopus: 1107