Machine Learning Security Against Data Poisoning: Are We There Yet?

Demontis, Ambra; Biggio, Battista; Roli, Fabio
2024-01-01

Abstract

Poisoning attacks compromise the data used to train machine learning (ML) models, degrading their overall performance, manipulating predictions on specific test samples, and implanting backdoors. This article reviews these attacks and discusses strategies to mitigate them, either through fundamental security principles or through defense mechanisms tailored to ML.
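The performance-degradation threat mentioned in the abstract can be illustrated with a minimal sketch (not taken from the paper itself): a label-flipping availability attack, where an adversary who controls a fraction of the training labels flips them and thereby lowers test accuracy. The dataset, model, and poisoning rate below are illustrative assumptions.

# Illustrative label-flipping poisoning sketch (hypothetical example, not the paper's method).
# Trains the same classifier on clean and on poisoned labels and compares test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task (stand-in for any training set).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Attacker flips the labels of 20% of the training points (assumed poisoning budget).
n_poison = int(0.2 * len(y_tr))
idx = rng.choice(len(y_tr), size=n_poison, replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels, so flipping is 0 <-> 1

clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
poisoned_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
print(f"clean accuracy: {clean_acc:.3f}  poisoned accuracy: {poisoned_acc:.3f}")

Targeted attacks and backdoors discussed in the article differ in goal: rather than lowering accuracy overall, they preserve it on clean inputs while forcing errors on chosen samples or trigger-stamped inputs.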
Computational modeling; Training data; Machine learning; Predictive models; Data models; Computer security
Files in this record:

Machine_Learning_Security_Against_Data_Poisoning_Are_We_There_Yet.pdf
Access: archive administrators only
Type: published version (VoR)
Size: 1.12 MB
Format: Adobe PDF

preprint-version-Machine_Learning_Security_Against_Data_Poisoning_Are_We_There_Yet.pdf
Access: open access
Type: pre-print version
Size: 1.36 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/393023
Citations
  • PMC: n/a
  • Scopus: 5
  • Web of Science: 4