Wild patterns: ten years after the rise of adversarial machine learning

Biggio, Battista (First); Roli, Fabio (Last)

2018-01-01

Abstract

Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations, carefully crafted either at training or at test time, can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms, up to more recent work aimed at understanding the security properties of deep learning algorithms in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
2018
adversarial machine learning; evasion attacks; poisoning attacks; adversarial examples; secure learning; deep learning
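
To make the test-time evasion attacks surveyed in this work concrete, the following minimal Python sketch applies an FGSM-style signed-gradient perturbation (in the spirit of Goodfellow et al., 2015, one of the attacks this survey covers) to flip the prediction of a toy linear classifier. The classifier weights, the input point, and the budget eps are illustrative assumptions, not values from the paper.

import numpy as np

# Hypothetical linear classifier f(x) = sign(w @ x + b); the weights
# below are illustrative, not taken from the paper.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return np.sign(w @ x + b)

# A test point correctly classified as +1.
x = np.array([2.0, 0.5, 1.0])
assert predict(x) == 1.0

# Evasion step: for a linear model, the gradient of the decision
# function w.r.t. the input is exactly w, so subtracting
# eps * sign(w) lowers the score as fast as possible under an
# L-infinity budget eps (an FGSM-style step).
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # prints 1.0 -1.0: the label flips

For a linear model this single signed step is the worst-case perturbation within an L-infinity ball of radius eps; for deep networks the same step is only a first-order approximation, which is why iterative variants are typically stronger.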
Files in this record:

biggio18-pr.pdf
Description: article
Type: pre-print version
Size: 3.85 MB
Format: Adobe PDF
Access: open access from 22/07/2020

Wild patterns, Ten years after the rise of adversarial machine learning_2018.pdf
Description: article
Type: publisher's version (VoR)
Size: 3.83 MB
Format: Adobe PDF
Access: restricted (archive managers only; a copy can be requested)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/249332
Citations
  • PMC: ND
  • Scopus: 873
  • Web of Science (ISI): 697