Machine Learning Security Against Data Poisoning: Are We There Yet?
Ambra Demontis, Battista Biggio, Fabio Roli
2024-01-01
Abstract
Poisoning attacks compromise the training data used to train machine learning (ML) models, degrading their overall performance, manipulating predictions on specific test samples, and implanting backdoors. This article reviews these attacks and discusses strategies to mitigate them, either through fundamental security principles or through defensive mechanisms tailored to ML.
Files in this record:

| File | Type | Size | Format | Access |
|---|---|---|---|---|
| Machine_Learning_Security_Against_Data_Poisoning_Are_We_There_Yet.pdf | Editorial version (VoR) | 1.12 MB | Adobe PDF | Restricted (copy on request) |
| preprint-version-Machine_Learning_Security_Against_Data_Poisoning_Are_We_There_Yet.pdf | Pre-print version | 1.36 MB | Adobe PDF | Open access |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.